00:00:00.000 Started by upstream project "autotest-per-patch" build number 132721
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.113 The recommended git tool is: git
00:00:00.113 using credential 00000000-0000-0000-0000-000000000002
00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.159 Fetching changes from the remote Git repository
00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.217 Using shallow fetch with depth 1
00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.217 > git --version # timeout=10
00:00:00.256 > git --version # 'git version 2.39.2'
00:00:00.256 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.560 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.572 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.585 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.585 > git config core.sparsecheckout # timeout=10
00:00:06.597 > git read-tree -mu HEAD # timeout=10
00:00:06.613 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.637 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.637 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.718 [Pipeline] Start of Pipeline
00:00:06.729 [Pipeline] library
00:00:06.730 Loading library shm_lib@master
00:00:06.731 Library shm_lib@master is cached. Copying from home.
00:00:06.747 [Pipeline] node
00:00:06.771 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.773 [Pipeline] {
00:00:06.781 [Pipeline] catchError
00:00:06.782 [Pipeline] {
00:00:06.791 [Pipeline] wrap
00:00:06.798 [Pipeline] {
00:00:06.803 [Pipeline] stage
00:00:06.805 [Pipeline] { (Prologue)
00:00:07.160 [Pipeline] sh
00:00:07.447 + logger -p user.info -t JENKINS-CI
00:00:07.465 [Pipeline] echo
00:00:07.467 Node: CYP9
00:00:07.474 [Pipeline] sh
00:00:07.774 [Pipeline] setCustomBuildProperty
00:00:07.783 [Pipeline] echo
00:00:07.784 Cleanup processes
00:00:07.788 [Pipeline] sh
00:00:08.072 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.072 1819929 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.085 [Pipeline] sh
00:00:08.374 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.374 ++ grep -v 'sudo pgrep'
00:00:08.374 ++ awk '{print $1}'
00:00:08.374 + sudo kill -9
00:00:08.374 + true
00:00:08.388 [Pipeline] cleanWs
00:00:08.397 [WS-CLEANUP] Deleting project workspace...
00:00:08.397 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.403 [WS-CLEANUP] done
00:00:08.406 [Pipeline] setCustomBuildProperty
00:00:08.416 [Pipeline] sh
00:00:08.698 + sudo git config --global --replace-all safe.directory '*'
00:00:08.783 [Pipeline] httpRequest
00:00:09.151 [Pipeline] echo
00:00:09.152 Sorcerer 10.211.164.20 is alive
00:00:09.161 [Pipeline] retry
00:00:09.163 [Pipeline] {
00:00:09.176 [Pipeline] httpRequest
00:00:09.180 HttpMethod: GET
00:00:09.181 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.181 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.199 Response Code: HTTP/1.1 200 OK
00:00:09.199 Success: Status code 200 is in the accepted range: 200,404
00:00:09.200 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.816 [Pipeline] }
00:00:13.833 [Pipeline] // retry
00:00:13.842 [Pipeline] sh
00:00:14.132 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.150 [Pipeline] httpRequest
00:00:14.816 [Pipeline] echo
00:00:14.818 Sorcerer 10.211.164.20 is alive
00:00:14.828 [Pipeline] retry
00:00:14.830 [Pipeline] {
00:00:14.845 [Pipeline] httpRequest
00:00:14.850 HttpMethod: GET
00:00:14.851 URL: http://10.211.164.20/packages/spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz
00:00:14.852 Sending request to url: http://10.211.164.20/packages/spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz
00:00:14.877 Response Code: HTTP/1.1 200 OK
00:00:14.877 Success: Status code 200 is in the accepted range: 200,404
00:00:14.878 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz
00:01:03.832 [Pipeline] }
00:01:03.847 [Pipeline] // retry
00:01:03.853 [Pipeline] sh
00:01:04.138 + tar --no-same-owner -xf spdk_b82e5bf0317e5c8c6f86fc0673571d5613d82113.tar.gz
00:01:07.455 [Pipeline] sh
00:01:07.749 + git -C spdk log --oneline -n5
00:01:07.749 b82e5bf03 bdev/compress: Simplify split logic for unmap operation
00:01:07.750 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:01:07.750 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:01:07.750 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:01:07.750 e2dfdf06c accel/mlx5: Register post_poller handler
00:01:07.763 [Pipeline] }
00:01:07.781 [Pipeline] // stage
00:01:07.794 [Pipeline] stage
00:01:07.797 [Pipeline] { (Prepare)
00:01:07.816 [Pipeline] writeFile
00:01:07.834 [Pipeline] sh
00:01:08.122 + logger -p user.info -t JENKINS-CI
00:01:08.135 [Pipeline] sh
00:01:08.482 + logger -p user.info -t JENKINS-CI
00:01:08.524 [Pipeline] sh
00:01:08.811 + cat autorun-spdk.conf
00:01:08.811 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.811 SPDK_TEST_NVMF=1
00:01:08.811 SPDK_TEST_NVME_CLI=1
00:01:08.811 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.811 SPDK_TEST_NVMF_NICS=e810
00:01:08.811 SPDK_TEST_VFIOUSER=1
00:01:08.811 SPDK_RUN_UBSAN=1
00:01:08.811 NET_TYPE=phy
00:01:08.819 RUN_NIGHTLY=0
00:01:08.823 [Pipeline] readFile
00:01:08.846 [Pipeline] withEnv
00:01:08.848 [Pipeline] {
00:01:08.859 [Pipeline] sh
00:01:09.147 + set -ex
00:01:09.147 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:09.147 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:09.147 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.147 ++ SPDK_TEST_NVMF=1
00:01:09.147 ++ SPDK_TEST_NVME_CLI=1
00:01:09.147 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:09.147 ++ SPDK_TEST_NVMF_NICS=e810
00:01:09.147 ++ SPDK_TEST_VFIOUSER=1
00:01:09.147 ++ SPDK_RUN_UBSAN=1
00:01:09.147 ++ NET_TYPE=phy
00:01:09.147 ++ RUN_NIGHTLY=0
00:01:09.147 + case $SPDK_TEST_NVMF_NICS in
00:01:09.147 + DRIVERS=ice
00:01:09.147 + [[ tcp == \r\d\m\a ]]
00:01:09.147 + [[ -n ice ]]
00:01:09.147 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:09.147 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:09.147 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:09.147 rmmod: ERROR: Module irdma is not currently loaded
00:01:09.147 rmmod: ERROR: Module i40iw is not currently loaded
00:01:09.147 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:09.147 + true
00:01:09.147 + for D in $DRIVERS
00:01:09.147 + sudo modprobe ice
00:01:09.147 + exit 0
00:01:09.158 [Pipeline] }
00:01:09.173 [Pipeline] // withEnv
00:01:09.178 [Pipeline] }
00:01:09.192 [Pipeline] // stage
00:01:09.203 [Pipeline] catchError
00:01:09.205 [Pipeline] {
00:01:09.219 [Pipeline] timeout
00:01:09.220 Timeout set to expire in 1 hr 0 min
00:01:09.221 [Pipeline] {
00:01:09.236 [Pipeline] stage
00:01:09.238 [Pipeline] { (Tests)
00:01:09.253 [Pipeline] sh
00:01:09.543 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.543 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.543 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.543 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:09.543 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:09.543 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:09.543 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:09.543 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:09.543 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:09.543 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:09.543 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:09.543 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:09.543 + source /etc/os-release
00:01:09.543 ++ NAME='Fedora Linux'
00:01:09.543 ++ VERSION='39 (Cloud Edition)'
00:01:09.543 ++ ID=fedora
00:01:09.543 ++ VERSION_ID=39
00:01:09.543 ++ VERSION_CODENAME=
00:01:09.543 ++ PLATFORM_ID=platform:f39
00:01:09.543 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:09.543 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:09.543 ++ LOGO=fedora-logo-icon
00:01:09.543 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:09.543 ++ HOME_URL=https://fedoraproject.org/
00:01:09.543 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:09.543 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:09.543 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:09.543 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:09.543 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:09.543 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:09.543 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:09.543 ++ SUPPORT_END=2024-11-12
00:01:09.543 ++ VARIANT='Cloud Edition'
00:01:09.543 ++ VARIANT_ID=cloud
00:01:09.543 + uname -a
00:01:09.543 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:09.543 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:12.846 Hugepages
00:01:12.846 node hugesize free / total
00:01:12.846 node0 1048576kB 0 / 0
00:01:12.846 node0 2048kB 0 / 0
00:01:12.846 node1 1048576kB 0 / 0
00:01:12.846 node1 2048kB 0 / 0
00:01:12.846
00:01:12.846 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.846 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:12.846 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:12.846 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:12.846 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:12.846 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:12.846 + rm -f /tmp/spdk-ld-path
00:01:12.846 + source autorun-spdk.conf
00:01:12.846 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.846 ++ SPDK_TEST_NVMF=1
00:01:12.846 ++ SPDK_TEST_NVME_CLI=1
00:01:12.846 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.846 ++ SPDK_TEST_NVMF_NICS=e810
00:01:12.846 ++ SPDK_TEST_VFIOUSER=1
00:01:12.846 ++ SPDK_RUN_UBSAN=1
00:01:12.846 ++ NET_TYPE=phy
00:01:12.846 ++ RUN_NIGHTLY=0
00:01:12.846 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:12.846 + [[ -n '' ]]
00:01:12.846 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.846 + for M in /var/spdk/build-*-manifest.txt
00:01:12.846 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:12.846 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.847 + for M in /var/spdk/build-*-manifest.txt
00:01:12.847 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:12.847 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.847 + for M in /var/spdk/build-*-manifest.txt
00:01:12.847 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:12.847 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.847 ++ uname
00:01:12.847 + [[ Linux == \L\i\n\u\x ]]
00:01:12.847 + sudo dmesg -T
00:01:12.847 + sudo dmesg --clear
00:01:12.847 + dmesg_pid=1820907
00:01:12.847 + [[ Fedora Linux == FreeBSD ]]
00:01:12.847 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.847 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.847 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:12.847 + [[ -x /usr/src/fio-static/fio ]]
00:01:12.847 + export FIO_BIN=/usr/src/fio-static/fio
00:01:12.847 + FIO_BIN=/usr/src/fio-static/fio
00:01:12.847 + sudo dmesg -Tw
00:01:12.847 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:12.847 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:12.847 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:12.847 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.847 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.847 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:12.847 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.847 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.847 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
13:08:59 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
13:08:59 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.847 13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
13:08:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
13:08:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
13:08:59 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.108 13:08:59 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
13:08:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:08:59 -- scripts/common.sh@15 -- $ shopt -s extglob
13:08:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
13:08:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:08:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:08:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:08:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:08:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:08:59 -- paths/export.sh@5 -- $ export PATH
13:08:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:08:59 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:08:59 -- common/autobuild_common.sh@493 -- $ date +%s
13:08:59 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733486939.XXXXXX
13:08:59 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733486939.QCW4nL
13:08:59 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
13:08:59 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
13:08:59 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
13:08:59 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:08:59 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:08:59 -- common/autobuild_common.sh@509 -- $ get_config_params
13:08:59 -- common/autotest_common.sh@409 -- $ xtrace_disable
13:08:59 -- common/autotest_common.sh@10 -- $ set +x
13:08:59 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
13:08:59 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
13:08:59 -- pm/common@17 -- $ local monitor
13:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:08:59 -- pm/common@21 -- $ date +%s
13:08:59 -- pm/common@25 -- $ sleep 1
13:08:59 -- pm/common@21 -- $ date +%s
13:08:59 -- pm/common@21 -- $ date +%s
13:08:59 -- pm/common@21 -- $ date +%s
13:08:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733486939
13:08:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733486939
13:08:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733486939
13:08:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733486939
00:01:13.109 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733486939_collect-cpu-load.pm.log
00:01:13.109 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733486939_collect-vmstat.pm.log
00:01:13.109 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733486939_collect-cpu-temp.pm.log
00:01:13.109 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733486939_collect-bmc-pm.bmc.pm.log
00:01:14.055 13:09:00 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
13:09:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
13:09:00 -- spdk/autobuild.sh@12 -- $ umask 022
13:09:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
13:09:00 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.055 Fri Dec 6 12:09:00 PM UTC 2024
13:09:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.055 v25.01-pre-304-gb82e5bf03
13:09:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
13:09:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
13:09:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
13:09:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
13:09:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable
13:09:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.055 ************************************
00:01:14.055 START TEST ubsan
00:01:14.055 ************************************
13:09:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:14.055 using ubsan
00:01:14.055
00:01:14.055 real 0m0.001s
00:01:14.055 user 0m0.000s
00:01:14.055 sys 0m0.000s
13:09:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
13:09:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.055 ************************************
00:01:14.055 END TEST ubsan
00:01:14.055 ************************************
13:09:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
13:09:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
13:09:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
13:09:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
13:09:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
13:09:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
13:09:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.317 13:09:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
13:09:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:14.317 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:14.317 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:14.890 Using 'verbs' RDMA provider
00:01:30.377 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:42.619 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:43.454 Creating mk/config.mk...done.
00:01:43.454 Creating mk/cc.flags.mk...done.
00:01:43.454 Type 'make' to build.
13:09:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
13:09:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
13:09:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
13:09:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.454 ************************************
00:01:43.454 START TEST make
00:01:43.454 ************************************
13:09:29 make -- common/autotest_common.sh@1129 -- $ make -j144
00:01:44.027 make[1]: Nothing to be done for 'all'.
00:01:45.413 The Meson build system
00:01:45.413 Version: 1.5.0
00:01:45.413 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:45.413 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.413 Build type: native build
00:01:45.413 Project name: libvfio-user
00:01:45.413 Project version: 0.0.1
00:01:45.413 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:45.413 C linker for the host machine: cc ld.bfd 2.40-14
00:01:45.413 Host machine cpu family: x86_64
00:01:45.413 Host machine cpu: x86_64
00:01:45.413 Run-time dependency threads found: YES
00:01:45.413 Library dl found: YES
00:01:45.413 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:45.413 Run-time dependency json-c found: YES 0.17
00:01:45.413 Run-time dependency cmocka found: YES 1.1.7
00:01:45.413 Program pytest-3 found: NO
00:01:45.413 Program flake8 found: NO
00:01:45.413 Program misspell-fixer found: NO
00:01:45.413 Program restructuredtext-lint found: NO
00:01:45.413 Program valgrind found: YES (/usr/bin/valgrind)
00:01:45.413 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:45.413 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:45.413 Compiler for C supports arguments -Wwrite-strings: YES
00:01:45.413 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.413 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:45.413 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:45.413 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.413 Build targets in project: 8
00:01:45.413 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:45.413 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:45.413
00:01:45.413 libvfio-user 0.0.1
00:01:45.413
00:01:45.413 User defined options
00:01:45.413 buildtype : debug
00:01:45.413 default_library: shared
00:01:45.413 libdir : /usr/local/lib
00:01:45.413
00:01:45.413 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:45.983 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:45.983 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:45.983 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:45.983 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:45.983 [4/37] Compiling C object samples/null.p/null.c.o
00:01:45.983 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:45.983 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:45.983 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:45.983 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:45.983 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:45.983 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:45.983 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:45.983 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:45.983 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:45.983 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:45.983 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:45.983 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:45.983 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:45.983 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:45.983 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:45.983 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:45.983 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:45.983 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:45.983 [23/37] Compiling C object samples/server.p/server.c.o
00:01:45.983 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:45.983 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:45.983 [26/37] Compiling C object samples/client.p/client.c.o
00:01:45.983 [27/37] Linking target samples/client
00:01:46.244 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:46.244 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:46.244 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:46.244 [31/37] Linking target test/unit_tests
00:01:46.244 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:46.506 [33/37] Linking target samples/gpio-pci-idio-16
00:01:46.506 [34/37] Linking target samples/lspci
00:01:46.506 [35/37] Linking target samples/server
00:01:46.506 [36/37] Linking target samples/null
00:01:46.506 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:46.506 INFO: autodetecting backend as ninja
00:01:46.506 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.506 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:46.766 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:46.766 ninja: no work to do.
00:01:53.359 The Meson build system
00:01:53.359 Version: 1.5.0
00:01:53.359 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:53.359 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:53.359 Build type: native build
00:01:53.359 Program cat found: YES (/usr/bin/cat)
00:01:53.359 Project name: DPDK
00:01:53.359 Project version: 24.03.0
00:01:53.359 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:53.359 C linker for the host machine: cc ld.bfd 2.40-14
00:01:53.359 Host machine cpu family: x86_64
00:01:53.359 Host machine cpu: x86_64
00:01:53.359 Message: ## Building in Developer Mode ##
00:01:53.359 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:53.359 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:53.359 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:53.359 Program python3 found: YES (/usr/bin/python3)
00:01:53.359 Program cat found: YES (/usr/bin/cat)
00:01:53.359 Compiler for C supports arguments -march=native: YES
00:01:53.359 Checking for size of "void *" : 8
00:01:53.359 Checking for size of "void *" : 8 (cached)
00:01:53.359 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:53.359 Library m found: YES
00:01:53.359 Library numa found: YES
00:01:53.359 Has header "numaif.h" : YES
00:01:53.359 Library fdt found: NO
00:01:53.359 Library execinfo found: NO
00:01:53.359 Has header "execinfo.h" : YES
00:01:53.359 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:53.359 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:53.359 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:53.359 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:53.359 Run-time dependency openssl found: YES 3.1.1
00:01:53.359 Run-time dependency libpcap found: YES 1.10.4
00:01:53.359 Has header "pcap.h" with dependency libpcap: YES
00:01:53.359 Compiler for C supports arguments -Wcast-qual: YES
00:01:53.359 Compiler for C supports arguments -Wdeprecated: YES
00:01:53.359 Compiler for C supports arguments -Wformat: YES
00:01:53.359 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:53.359 Compiler for C supports arguments -Wformat-security: NO
00:01:53.359 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.359 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:53.359 Compiler for C supports arguments -Wnested-externs: YES
00:01:53.359 Compiler for C supports arguments -Wold-style-definition: YES
00:01:53.359 Compiler for C supports arguments -Wpointer-arith: YES
00:01:53.360 Compiler for C supports arguments -Wsign-compare: YES
00:01:53.360 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:53.360 Compiler for C supports arguments -Wundef: YES
00:01:53.360 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.360 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:53.360 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:53.360 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.360 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:53.360 Program objdump found: YES (/usr/bin/objdump)
00:01:53.360 Compiler for C supports arguments -mavx512f: YES
00:01:53.360 Checking if "AVX512 checking" compiles: YES
00:01:53.360 Fetching value of define "__SSE4_2__" : 1
00:01:53.360 Fetching value of define "__AES__" : 1
00:01:53.360 Fetching value of define "__AVX__" : 1
00:01:53.360 Fetching value of define "__AVX2__" : 1
00:01:53.360 Fetching value of define "__AVX512BW__" : 1
00:01:53.360 Fetching value of define "__AVX512CD__" : 1
00:01:53.360 Fetching value of define "__AVX512DQ__" : 1
00:01:53.360 Fetching value of define "__AVX512F__" : 1
00:01:53.360 Fetching value of define "__AVX512VL__" : 1 00:01:53.360 Fetching value of define "__PCLMUL__" : 1 00:01:53.360 Fetching value of define "__RDRND__" : 1 00:01:53.360 Fetching value of define "__RDSEED__" : 1 00:01:53.360 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:53.360 Fetching value of define "__znver1__" : (undefined) 00:01:53.360 Fetching value of define "__znver2__" : (undefined) 00:01:53.360 Fetching value of define "__znver3__" : (undefined) 00:01:53.360 Fetching value of define "__znver4__" : (undefined) 00:01:53.360 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:53.360 Message: lib/log: Defining dependency "log" 00:01:53.360 Message: lib/kvargs: Defining dependency "kvargs" 00:01:53.360 Message: lib/telemetry: Defining dependency "telemetry" 00:01:53.360 Checking for function "getentropy" : NO 00:01:53.360 Message: lib/eal: Defining dependency "eal" 00:01:53.360 Message: lib/ring: Defining dependency "ring" 00:01:53.360 Message: lib/rcu: Defining dependency "rcu" 00:01:53.360 Message: lib/mempool: Defining dependency "mempool" 00:01:53.360 Message: lib/mbuf: Defining dependency "mbuf" 00:01:53.360 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:53.360 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:53.360 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:53.360 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:53.360 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:53.360 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:53.360 Compiler for C supports arguments -mpclmul: YES 00:01:53.360 Compiler for C supports arguments -maes: YES 00:01:53.360 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.360 Compiler for C supports arguments -mavx512bw: YES 00:01:53.360 Compiler for C supports arguments -mavx512dq: YES 00:01:53.360 Compiler for C supports arguments -mavx512vl: YES 00:01:53.360 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:53.360 Compiler for C supports arguments -mavx2: YES 00:01:53.360 Compiler for C supports arguments -mavx: YES 00:01:53.360 Message: lib/net: Defining dependency "net" 00:01:53.360 Message: lib/meter: Defining dependency "meter" 00:01:53.360 Message: lib/ethdev: Defining dependency "ethdev" 00:01:53.360 Message: lib/pci: Defining dependency "pci" 00:01:53.360 Message: lib/cmdline: Defining dependency "cmdline" 00:01:53.360 Message: lib/hash: Defining dependency "hash" 00:01:53.360 Message: lib/timer: Defining dependency "timer" 00:01:53.360 Message: lib/compressdev: Defining dependency "compressdev" 00:01:53.360 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:53.360 Message: lib/dmadev: Defining dependency "dmadev" 00:01:53.360 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:53.360 Message: lib/power: Defining dependency "power" 00:01:53.360 Message: lib/reorder: Defining dependency "reorder" 00:01:53.360 Message: lib/security: Defining dependency "security" 00:01:53.360 Has header "linux/userfaultfd.h" : YES 00:01:53.360 Has header "linux/vduse.h" : YES 00:01:53.360 Message: lib/vhost: Defining dependency "vhost" 00:01:53.360 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.360 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.360 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.360 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.360 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:53.360 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:53.360 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:53.360 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:53.360 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:53.360 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:53.360 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:53.360 Configuring doxy-api-html.conf using configuration 00:01:53.360 Configuring doxy-api-man.conf using configuration 00:01:53.360 Program mandb found: YES (/usr/bin/mandb) 00:01:53.360 Program sphinx-build found: NO 00:01:53.360 Configuring rte_build_config.h using configuration 00:01:53.360 Message: 00:01:53.360 ================= 00:01:53.360 Applications Enabled 00:01:53.360 ================= 00:01:53.360 00:01:53.360 apps: 00:01:53.360 00:01:53.360 00:01:53.360 Message: 00:01:53.360 ================= 00:01:53.360 Libraries Enabled 00:01:53.360 ================= 00:01:53.360 00:01:53.360 libs: 00:01:53.360 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:53.360 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:53.360 cryptodev, dmadev, power, reorder, security, vhost, 00:01:53.360 00:01:53.360 Message: 00:01:53.360 =============== 00:01:53.360 Drivers Enabled 00:01:53.360 =============== 00:01:53.360 00:01:53.360 common: 00:01:53.360 00:01:53.360 bus: 00:01:53.360 pci, vdev, 00:01:53.360 mempool: 00:01:53.360 ring, 00:01:53.360 dma: 00:01:53.360 00:01:53.360 net: 00:01:53.360 00:01:53.360 crypto: 00:01:53.360 00:01:53.360 compress: 00:01:53.360 00:01:53.360 vdpa: 00:01:53.360 00:01:53.360 00:01:53.360 Message: 00:01:53.360 ================= 00:01:53.360 Content Skipped 00:01:53.360 ================= 00:01:53.360 00:01:53.360 apps: 00:01:53.360 dumpcap: explicitly disabled via build config 00:01:53.360 graph: explicitly disabled via build config 00:01:53.360 pdump: explicitly disabled via build config 00:01:53.360 proc-info: explicitly disabled via build config 00:01:53.360 test-acl: explicitly disabled via build config 00:01:53.360 test-bbdev: explicitly disabled via build config 00:01:53.360 test-cmdline: explicitly disabled via build config 00:01:53.360 test-compress-perf: explicitly disabled via build config 00:01:53.360 test-crypto-perf: explicitly disabled via build 
config 00:01:53.360 test-dma-perf: explicitly disabled via build config 00:01:53.360 test-eventdev: explicitly disabled via build config 00:01:53.360 test-fib: explicitly disabled via build config 00:01:53.360 test-flow-perf: explicitly disabled via build config 00:01:53.360 test-gpudev: explicitly disabled via build config 00:01:53.360 test-mldev: explicitly disabled via build config 00:01:53.360 test-pipeline: explicitly disabled via build config 00:01:53.360 test-pmd: explicitly disabled via build config 00:01:53.360 test-regex: explicitly disabled via build config 00:01:53.360 test-sad: explicitly disabled via build config 00:01:53.360 test-security-perf: explicitly disabled via build config 00:01:53.360 00:01:53.360 libs: 00:01:53.360 argparse: explicitly disabled via build config 00:01:53.360 metrics: explicitly disabled via build config 00:01:53.360 acl: explicitly disabled via build config 00:01:53.360 bbdev: explicitly disabled via build config 00:01:53.360 bitratestats: explicitly disabled via build config 00:01:53.360 bpf: explicitly disabled via build config 00:01:53.360 cfgfile: explicitly disabled via build config 00:01:53.360 distributor: explicitly disabled via build config 00:01:53.360 efd: explicitly disabled via build config 00:01:53.360 eventdev: explicitly disabled via build config 00:01:53.360 dispatcher: explicitly disabled via build config 00:01:53.360 gpudev: explicitly disabled via build config 00:01:53.360 gro: explicitly disabled via build config 00:01:53.360 gso: explicitly disabled via build config 00:01:53.360 ip_frag: explicitly disabled via build config 00:01:53.360 jobstats: explicitly disabled via build config 00:01:53.360 latencystats: explicitly disabled via build config 00:01:53.360 lpm: explicitly disabled via build config 00:01:53.360 member: explicitly disabled via build config 00:01:53.360 pcapng: explicitly disabled via build config 00:01:53.360 rawdev: explicitly disabled via build config 00:01:53.360 regexdev: explicitly 
disabled via build config 00:01:53.360 mldev: explicitly disabled via build config 00:01:53.360 rib: explicitly disabled via build config 00:01:53.360 sched: explicitly disabled via build config 00:01:53.360 stack: explicitly disabled via build config 00:01:53.360 ipsec: explicitly disabled via build config 00:01:53.360 pdcp: explicitly disabled via build config 00:01:53.360 fib: explicitly disabled via build config 00:01:53.360 port: explicitly disabled via build config 00:01:53.360 pdump: explicitly disabled via build config 00:01:53.360 table: explicitly disabled via build config 00:01:53.360 pipeline: explicitly disabled via build config 00:01:53.360 graph: explicitly disabled via build config 00:01:53.360 node: explicitly disabled via build config 00:01:53.360 00:01:53.360 drivers: 00:01:53.360 common/cpt: not in enabled drivers build config 00:01:53.360 common/dpaax: not in enabled drivers build config 00:01:53.360 common/iavf: not in enabled drivers build config 00:01:53.360 common/idpf: not in enabled drivers build config 00:01:53.360 common/ionic: not in enabled drivers build config 00:01:53.360 common/mvep: not in enabled drivers build config 00:01:53.361 common/octeontx: not in enabled drivers build config 00:01:53.361 bus/auxiliary: not in enabled drivers build config 00:01:53.361 bus/cdx: not in enabled drivers build config 00:01:53.361 bus/dpaa: not in enabled drivers build config 00:01:53.361 bus/fslmc: not in enabled drivers build config 00:01:53.361 bus/ifpga: not in enabled drivers build config 00:01:53.361 bus/platform: not in enabled drivers build config 00:01:53.361 bus/uacce: not in enabled drivers build config 00:01:53.361 bus/vmbus: not in enabled drivers build config 00:01:53.361 common/cnxk: not in enabled drivers build config 00:01:53.361 common/mlx5: not in enabled drivers build config 00:01:53.361 common/nfp: not in enabled drivers build config 00:01:53.361 common/nitrox: not in enabled drivers build config 00:01:53.361 common/qat: not 
in enabled drivers build config 00:01:53.361 common/sfc_efx: not in enabled drivers build config 00:01:53.361 mempool/bucket: not in enabled drivers build config 00:01:53.361 mempool/cnxk: not in enabled drivers build config 00:01:53.361 mempool/dpaa: not in enabled drivers build config 00:01:53.361 mempool/dpaa2: not in enabled drivers build config 00:01:53.361 mempool/octeontx: not in enabled drivers build config 00:01:53.361 mempool/stack: not in enabled drivers build config 00:01:53.361 dma/cnxk: not in enabled drivers build config 00:01:53.361 dma/dpaa: not in enabled drivers build config 00:01:53.361 dma/dpaa2: not in enabled drivers build config 00:01:53.361 dma/hisilicon: not in enabled drivers build config 00:01:53.361 dma/idxd: not in enabled drivers build config 00:01:53.361 dma/ioat: not in enabled drivers build config 00:01:53.361 dma/skeleton: not in enabled drivers build config 00:01:53.361 net/af_packet: not in enabled drivers build config 00:01:53.361 net/af_xdp: not in enabled drivers build config 00:01:53.361 net/ark: not in enabled drivers build config 00:01:53.361 net/atlantic: not in enabled drivers build config 00:01:53.361 net/avp: not in enabled drivers build config 00:01:53.361 net/axgbe: not in enabled drivers build config 00:01:53.361 net/bnx2x: not in enabled drivers build config 00:01:53.361 net/bnxt: not in enabled drivers build config 00:01:53.361 net/bonding: not in enabled drivers build config 00:01:53.361 net/cnxk: not in enabled drivers build config 00:01:53.361 net/cpfl: not in enabled drivers build config 00:01:53.361 net/cxgbe: not in enabled drivers build config 00:01:53.361 net/dpaa: not in enabled drivers build config 00:01:53.361 net/dpaa2: not in enabled drivers build config 00:01:53.361 net/e1000: not in enabled drivers build config 00:01:53.361 net/ena: not in enabled drivers build config 00:01:53.361 net/enetc: not in enabled drivers build config 00:01:53.361 net/enetfec: not in enabled drivers build config 
00:01:53.361 net/enic: not in enabled drivers build config 00:01:53.361 net/failsafe: not in enabled drivers build config 00:01:53.361 net/fm10k: not in enabled drivers build config 00:01:53.361 net/gve: not in enabled drivers build config 00:01:53.361 net/hinic: not in enabled drivers build config 00:01:53.361 net/hns3: not in enabled drivers build config 00:01:53.361 net/i40e: not in enabled drivers build config 00:01:53.361 net/iavf: not in enabled drivers build config 00:01:53.361 net/ice: not in enabled drivers build config 00:01:53.361 net/idpf: not in enabled drivers build config 00:01:53.361 net/igc: not in enabled drivers build config 00:01:53.361 net/ionic: not in enabled drivers build config 00:01:53.361 net/ipn3ke: not in enabled drivers build config 00:01:53.361 net/ixgbe: not in enabled drivers build config 00:01:53.361 net/mana: not in enabled drivers build config 00:01:53.361 net/memif: not in enabled drivers build config 00:01:53.361 net/mlx4: not in enabled drivers build config 00:01:53.361 net/mlx5: not in enabled drivers build config 00:01:53.361 net/mvneta: not in enabled drivers build config 00:01:53.361 net/mvpp2: not in enabled drivers build config 00:01:53.361 net/netvsc: not in enabled drivers build config 00:01:53.361 net/nfb: not in enabled drivers build config 00:01:53.361 net/nfp: not in enabled drivers build config 00:01:53.361 net/ngbe: not in enabled drivers build config 00:01:53.361 net/null: not in enabled drivers build config 00:01:53.361 net/octeontx: not in enabled drivers build config 00:01:53.361 net/octeon_ep: not in enabled drivers build config 00:01:53.361 net/pcap: not in enabled drivers build config 00:01:53.361 net/pfe: not in enabled drivers build config 00:01:53.361 net/qede: not in enabled drivers build config 00:01:53.361 net/ring: not in enabled drivers build config 00:01:53.361 net/sfc: not in enabled drivers build config 00:01:53.361 net/softnic: not in enabled drivers build config 00:01:53.361 net/tap: not in 
enabled drivers build config 00:01:53.361 net/thunderx: not in enabled drivers build config 00:01:53.361 net/txgbe: not in enabled drivers build config 00:01:53.361 net/vdev_netvsc: not in enabled drivers build config 00:01:53.361 net/vhost: not in enabled drivers build config 00:01:53.361 net/virtio: not in enabled drivers build config 00:01:53.361 net/vmxnet3: not in enabled drivers build config 00:01:53.361 raw/*: missing internal dependency, "rawdev" 00:01:53.361 crypto/armv8: not in enabled drivers build config 00:01:53.361 crypto/bcmfs: not in enabled drivers build config 00:01:53.361 crypto/caam_jr: not in enabled drivers build config 00:01:53.361 crypto/ccp: not in enabled drivers build config 00:01:53.361 crypto/cnxk: not in enabled drivers build config 00:01:53.361 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.361 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.361 crypto/ipsec_mb: not in enabled drivers build config 00:01:53.361 crypto/mlx5: not in enabled drivers build config 00:01:53.361 crypto/mvsam: not in enabled drivers build config 00:01:53.361 crypto/nitrox: not in enabled drivers build config 00:01:53.361 crypto/null: not in enabled drivers build config 00:01:53.361 crypto/octeontx: not in enabled drivers build config 00:01:53.361 crypto/openssl: not in enabled drivers build config 00:01:53.361 crypto/scheduler: not in enabled drivers build config 00:01:53.361 crypto/uadk: not in enabled drivers build config 00:01:53.361 crypto/virtio: not in enabled drivers build config 00:01:53.361 compress/isal: not in enabled drivers build config 00:01:53.361 compress/mlx5: not in enabled drivers build config 00:01:53.361 compress/nitrox: not in enabled drivers build config 00:01:53.361 compress/octeontx: not in enabled drivers build config 00:01:53.361 compress/zlib: not in enabled drivers build config 00:01:53.361 regex/*: missing internal dependency, "regexdev" 00:01:53.361 ml/*: missing internal dependency, "mldev" 
00:01:53.361 vdpa/ifc: not in enabled drivers build config 00:01:53.361 vdpa/mlx5: not in enabled drivers build config 00:01:53.361 vdpa/nfp: not in enabled drivers build config 00:01:53.361 vdpa/sfc: not in enabled drivers build config 00:01:53.361 event/*: missing internal dependency, "eventdev" 00:01:53.361 baseband/*: missing internal dependency, "bbdev" 00:01:53.361 gpu/*: missing internal dependency, "gpudev" 00:01:53.361 00:01:53.361 00:01:53.361 Build targets in project: 84 00:01:53.361 00:01:53.361 DPDK 24.03.0 00:01:53.361 00:01:53.361 User defined options 00:01:53.361 buildtype : debug 00:01:53.361 default_library : shared 00:01:53.361 libdir : lib 00:01:53.361 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:53.361 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:53.361 c_link_args : 00:01:53.361 cpu_instruction_set: native 00:01:53.361 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:53.361 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:53.361 enable_docs : false 00:01:53.361 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:53.361 enable_kmods : false 00:01:53.361 max_lcores : 128 00:01:53.361 tests : false 00:01:53.361 00:01:53.361 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.361 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:53.361 [1/267] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:53.361 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.361 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:53.361 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.361 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:53.361 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:53.361 [7/267] Linking static target lib/librte_kvargs.a 00:01:53.361 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.361 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:53.361 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:53.361 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:53.361 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:53.361 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.361 [14/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:53.361 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.361 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:53.361 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:53.361 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:53.361 [19/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:53.361 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:53.361 [21/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:53.361 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:53.361 [23/267] Linking static target lib/librte_log.a 00:01:53.361 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:53.361 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:53.361 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:53.361 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:53.361 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:53.361 [29/267] Linking static target lib/librte_pci.a 00:01:53.362 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:53.362 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:53.362 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:53.362 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:53.362 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:53.622 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:53.622 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.622 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.622 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.622 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.622 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.622 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.622 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.622 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.622 [44/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:53.622 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.622 [46/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.622 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.622 [48/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.622 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:53.622 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:53.622 [51/267] Linking static target lib/librte_timer.a 00:01:53.622 [52/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:53.622 [53/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.622 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:53.622 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.622 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:53.622 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.622 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:53.622 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:53.882 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.882 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.882 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.882 [63/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:53.882 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:53.882 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:53.882 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:53.882 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:53.882 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:53.882 
[69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:53.882 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.882 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:53.882 [72/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.882 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:53.882 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.882 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.882 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:53.882 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:53.882 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.882 [79/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.882 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.882 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:53.882 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.882 [83/267] Linking static target lib/librte_telemetry.a 00:01:53.882 [84/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:53.882 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:53.882 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:53.882 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:53.882 [88/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.882 [89/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:53.882 [90/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:53.882 [91/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 
00:01:53.882 [92/267] Linking static target lib/librte_meter.a 00:01:53.882 [93/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.882 [94/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:53.882 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:53.882 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:53.882 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.882 [98/267] Linking static target lib/librte_ring.a 00:01:53.882 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:53.882 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:53.882 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:53.882 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:53.882 [103/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.882 [104/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.882 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:53.882 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.882 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.882 [108/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:53.882 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:53.882 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:53.882 [111/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.882 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:53.882 [113/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:53.882 [114/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:53.882 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:53.882 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:53.882 [117/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:53.882 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:53.882 [119/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.882 [120/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:53.882 [121/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.882 [122/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:53.882 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:53.882 [124/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.882 [125/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.882 [126/267] Linking static target lib/librte_cmdline.a 00:01:53.882 [127/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.882 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:53.882 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.882 [130/267] Linking static target lib/librte_reorder.a 00:01:53.882 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.882 [132/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.882 [133/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.882 [134/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:53.882 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:53.882 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:53.882 
[137/267] Linking static target lib/librte_net.a 00:01:53.882 [138/267] Linking static target lib/librte_compressdev.a 00:01:53.882 [139/267] Linking static target lib/librte_mempool.a 00:01:53.882 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:53.882 [141/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.882 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:53.882 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:53.882 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:53.882 [145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:53.882 [146/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.882 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:53.882 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:53.882 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:53.883 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.883 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:53.883 [152/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.883 [153/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.883 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.883 [155/267] Linking target lib/librte_log.so.24.1 00:01:53.883 [156/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.883 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.883 [158/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.883 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.883 [160/267] Compiling C 
object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:53.883 [161/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.883 [162/267] Linking static target lib/librte_dmadev.a 00:01:54.142 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.142 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.142 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.142 [166/267] Linking static target lib/librte_power.a 00:01:54.142 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:54.142 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:54.142 [169/267] Linking static target lib/librte_rcu.a 00:01:54.142 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.142 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.142 [172/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:54.142 [173/267] Linking static target lib/librte_eal.a 00:01:54.142 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:54.142 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:54.142 [176/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.142 [177/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.142 [178/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:54.142 [179/267] Linking static target lib/librte_mbuf.a 00:01:54.142 [180/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.142 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.142 [182/267] Linking static target lib/librte_security.a 00:01:54.142 [183/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:54.142 [184/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:54.142 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.142 [186/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.142 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.142 [188/267] Linking target lib/librte_kvargs.so.24.1 00:01:54.142 [189/267] Linking static target lib/librte_hash.a 00:01:54.142 [190/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.142 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.142 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.142 [193/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.142 [194/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.142 [195/267] Linking static target drivers/librte_bus_vdev.a 00:01:54.142 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.142 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.142 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.142 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:54.142 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.401 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.401 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.401 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.401 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.401 [205/267] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:54.401 [206/267] Linking static target drivers/librte_mempool_ring.a 00:01:54.401 [207/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.401 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.401 [209/267] Linking static target lib/librte_cryptodev.a 00:01:54.401 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.401 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.401 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:54.661 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.661 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:54.661 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.661 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.661 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.661 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.921 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.921 [220/267] Linking static target lib/librte_ethdev.a 00:01:54.921 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.921 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.921 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.180 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.180 [225/267] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.180 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.122 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.122 [228/267] Linking static target lib/librte_vhost.a 00:01:56.692 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.081 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.688 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.630 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.630 [233/267] Linking target lib/librte_eal.so.24.1 00:02:05.891 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.891 [235/267] Linking target lib/librte_meter.so.24.1 00:02:05.891 [236/267] Linking target lib/librte_timer.so.24.1 00:02:05.891 [237/267] Linking target lib/librte_ring.so.24.1 00:02:05.891 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.891 [239/267] Linking target lib/librte_pci.so.24.1 00:02:05.891 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:05.891 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.891 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.891 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.891 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.891 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.891 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:05.891 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:05.891 [248/267] Linking 
target drivers/librte_bus_pci.so.24.1 00:02:06.152 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.152 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.152 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:06.152 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:06.413 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.413 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:06.413 [255/267] Linking target lib/librte_net.so.24.1 00:02:06.413 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:06.413 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:06.413 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.413 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:06.673 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:06.673 [261/267] Linking target lib/librte_hash.so.24.1 00:02:06.673 [262/267] Linking target lib/librte_security.so.24.1 00:02:06.673 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:06.673 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.673 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.673 [266/267] Linking target lib/librte_power.so.24.1 00:02:06.673 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:06.673 INFO: autodetecting backend as ninja 00:02:06.673 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:10.876 CC lib/log/log.o 00:02:10.876 CC lib/log/log_flags.o 00:02:10.876 CC lib/log/log_deprecated.o 00:02:10.876 CC lib/ut/ut.o 00:02:10.876 CC lib/ut_mock/mock.o 00:02:10.876 LIB libspdk_ut.a 00:02:10.876 LIB libspdk_ut_mock.a 00:02:10.876 LIB 
libspdk_log.a 00:02:10.876 SO libspdk_ut.so.2.0 00:02:10.876 SO libspdk_ut_mock.so.6.0 00:02:10.876 SO libspdk_log.so.7.1 00:02:11.142 SYMLINK libspdk_ut.so 00:02:11.142 SYMLINK libspdk_ut_mock.so 00:02:11.142 SYMLINK libspdk_log.so 00:02:11.403 CC lib/util/base64.o 00:02:11.403 CC lib/util/bit_array.o 00:02:11.403 CC lib/util/cpuset.o 00:02:11.403 CXX lib/trace_parser/trace.o 00:02:11.403 CC lib/util/crc16.o 00:02:11.403 CC lib/util/crc32.o 00:02:11.403 CC lib/util/crc32c.o 00:02:11.403 CC lib/util/crc32_ieee.o 00:02:11.403 CC lib/util/crc64.o 00:02:11.403 CC lib/util/dif.o 00:02:11.403 CC lib/util/fd.o 00:02:11.403 CC lib/util/fd_group.o 00:02:11.403 CC lib/dma/dma.o 00:02:11.403 CC lib/ioat/ioat.o 00:02:11.403 CC lib/util/file.o 00:02:11.403 CC lib/util/hexlify.o 00:02:11.403 CC lib/util/iov.o 00:02:11.403 CC lib/util/math.o 00:02:11.403 CC lib/util/net.o 00:02:11.403 CC lib/util/pipe.o 00:02:11.403 CC lib/util/strerror_tls.o 00:02:11.403 CC lib/util/string.o 00:02:11.403 CC lib/util/uuid.o 00:02:11.403 CC lib/util/xor.o 00:02:11.403 CC lib/util/zipf.o 00:02:11.403 CC lib/util/md5.o 00:02:11.663 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.663 CC lib/vfio_user/host/vfio_user.o 00:02:11.663 LIB libspdk_dma.a 00:02:11.663 SO libspdk_dma.so.5.0 00:02:11.663 LIB libspdk_ioat.a 00:02:11.663 SO libspdk_ioat.so.7.0 00:02:11.663 SYMLINK libspdk_dma.so 00:02:11.923 SYMLINK libspdk_ioat.so 00:02:11.923 LIB libspdk_vfio_user.a 00:02:11.923 SO libspdk_vfio_user.so.5.0 00:02:11.923 LIB libspdk_util.a 00:02:11.923 SYMLINK libspdk_vfio_user.so 00:02:11.923 SO libspdk_util.so.10.1 00:02:12.184 SYMLINK libspdk_util.so 00:02:12.184 LIB libspdk_trace_parser.a 00:02:12.184 SO libspdk_trace_parser.so.6.0 00:02:12.445 SYMLINK libspdk_trace_parser.so 00:02:12.445 CC lib/rdma_utils/rdma_utils.o 00:02:12.445 CC lib/conf/conf.o 00:02:12.445 CC lib/json/json_parse.o 00:02:12.445 CC lib/vmd/vmd.o 00:02:12.445 CC lib/idxd/idxd.o 00:02:12.445 CC lib/json/json_util.o 00:02:12.445 CC 
lib/vmd/led.o 00:02:12.445 CC lib/idxd/idxd_user.o 00:02:12.445 CC lib/json/json_write.o 00:02:12.445 CC lib/env_dpdk/env.o 00:02:12.445 CC lib/idxd/idxd_kernel.o 00:02:12.445 CC lib/env_dpdk/memory.o 00:02:12.445 CC lib/env_dpdk/pci.o 00:02:12.445 CC lib/env_dpdk/init.o 00:02:12.445 CC lib/env_dpdk/threads.o 00:02:12.445 CC lib/env_dpdk/pci_ioat.o 00:02:12.445 CC lib/env_dpdk/pci_virtio.o 00:02:12.445 CC lib/env_dpdk/pci_vmd.o 00:02:12.445 CC lib/env_dpdk/pci_idxd.o 00:02:12.445 CC lib/env_dpdk/pci_event.o 00:02:12.445 CC lib/env_dpdk/sigbus_handler.o 00:02:12.445 CC lib/env_dpdk/pci_dpdk.o 00:02:12.445 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.445 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.707 LIB libspdk_conf.a 00:02:12.969 SO libspdk_conf.so.6.0 00:02:12.969 LIB libspdk_rdma_utils.a 00:02:12.969 LIB libspdk_json.a 00:02:12.969 SO libspdk_rdma_utils.so.1.0 00:02:12.969 SYMLINK libspdk_conf.so 00:02:12.969 SO libspdk_json.so.6.0 00:02:12.969 SYMLINK libspdk_rdma_utils.so 00:02:12.969 SYMLINK libspdk_json.so 00:02:13.231 LIB libspdk_idxd.a 00:02:13.231 LIB libspdk_vmd.a 00:02:13.231 SO libspdk_idxd.so.12.1 00:02:13.231 SO libspdk_vmd.so.6.0 00:02:13.231 SYMLINK libspdk_idxd.so 00:02:13.231 SYMLINK libspdk_vmd.so 00:02:13.231 CC lib/rdma_provider/common.o 00:02:13.231 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:13.231 CC lib/jsonrpc/jsonrpc_server.o 00:02:13.231 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:13.231 CC lib/jsonrpc/jsonrpc_client.o 00:02:13.231 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:13.492 LIB libspdk_rdma_provider.a 00:02:13.492 SO libspdk_rdma_provider.so.7.0 00:02:13.492 LIB libspdk_jsonrpc.a 00:02:13.752 SO libspdk_jsonrpc.so.6.0 00:02:13.752 SYMLINK libspdk_rdma_provider.so 00:02:13.752 SYMLINK libspdk_jsonrpc.so 00:02:13.752 LIB libspdk_env_dpdk.a 00:02:14.013 SO libspdk_env_dpdk.so.15.1 00:02:14.013 SYMLINK libspdk_env_dpdk.so 00:02:14.013 CC lib/rpc/rpc.o 00:02:14.275 LIB libspdk_rpc.a 00:02:14.275 SO libspdk_rpc.so.6.0 00:02:14.535 SYMLINK 
libspdk_rpc.so 00:02:14.795 CC lib/trace/trace.o 00:02:14.795 CC lib/trace/trace_flags.o 00:02:14.795 CC lib/trace/trace_rpc.o 00:02:14.795 CC lib/keyring/keyring.o 00:02:14.795 CC lib/keyring/keyring_rpc.o 00:02:14.795 CC lib/notify/notify.o 00:02:14.795 CC lib/notify/notify_rpc.o 00:02:15.056 LIB libspdk_notify.a 00:02:15.056 SO libspdk_notify.so.6.0 00:02:15.056 LIB libspdk_keyring.a 00:02:15.056 LIB libspdk_trace.a 00:02:15.056 SO libspdk_keyring.so.2.0 00:02:15.056 SYMLINK libspdk_notify.so 00:02:15.056 SO libspdk_trace.so.11.0 00:02:15.056 SYMLINK libspdk_keyring.so 00:02:15.056 SYMLINK libspdk_trace.so 00:02:15.627 CC lib/sock/sock.o 00:02:15.627 CC lib/sock/sock_rpc.o 00:02:15.627 CC lib/thread/thread.o 00:02:15.627 CC lib/thread/iobuf.o 00:02:15.888 LIB libspdk_sock.a 00:02:15.888 SO libspdk_sock.so.10.0 00:02:15.888 SYMLINK libspdk_sock.so 00:02:16.462 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:16.462 CC lib/nvme/nvme_ctrlr.o 00:02:16.462 CC lib/nvme/nvme_fabric.o 00:02:16.462 CC lib/nvme/nvme_ns_cmd.o 00:02:16.462 CC lib/nvme/nvme_ns.o 00:02:16.462 CC lib/nvme/nvme_pcie_common.o 00:02:16.462 CC lib/nvme/nvme_pcie.o 00:02:16.462 CC lib/nvme/nvme_qpair.o 00:02:16.462 CC lib/nvme/nvme.o 00:02:16.462 CC lib/nvme/nvme_quirks.o 00:02:16.462 CC lib/nvme/nvme_transport.o 00:02:16.462 CC lib/nvme/nvme_discovery.o 00:02:16.462 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:16.462 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:16.462 CC lib/nvme/nvme_tcp.o 00:02:16.462 CC lib/nvme/nvme_opal.o 00:02:16.462 CC lib/nvme/nvme_io_msg.o 00:02:16.462 CC lib/nvme/nvme_poll_group.o 00:02:16.462 CC lib/nvme/nvme_zns.o 00:02:16.462 CC lib/nvme/nvme_stubs.o 00:02:16.462 CC lib/nvme/nvme_auth.o 00:02:16.462 CC lib/nvme/nvme_cuse.o 00:02:16.462 CC lib/nvme/nvme_vfio_user.o 00:02:16.462 CC lib/nvme/nvme_rdma.o 00:02:17.035 LIB libspdk_thread.a 00:02:17.035 SO libspdk_thread.so.11.0 00:02:17.035 SYMLINK libspdk_thread.so 00:02:17.298 CC lib/accel/accel.o 00:02:17.298 CC lib/accel/accel_sw.o 00:02:17.298 
CC lib/accel/accel_rpc.o 00:02:17.298 CC lib/virtio/virtio.o 00:02:17.298 CC lib/virtio/virtio_vhost_user.o 00:02:17.298 CC lib/virtio/virtio_vfio_user.o 00:02:17.298 CC lib/vfu_tgt/tgt_endpoint.o 00:02:17.298 CC lib/vfu_tgt/tgt_rpc.o 00:02:17.298 CC lib/virtio/virtio_pci.o 00:02:17.298 CC lib/blob/blobstore.o 00:02:17.298 CC lib/fsdev/fsdev.o 00:02:17.298 CC lib/blob/request.o 00:02:17.298 CC lib/fsdev/fsdev_io.o 00:02:17.298 CC lib/blob/zeroes.o 00:02:17.298 CC lib/fsdev/fsdev_rpc.o 00:02:17.298 CC lib/blob/blob_bs_dev.o 00:02:17.298 CC lib/init/json_config.o 00:02:17.298 CC lib/init/subsystem.o 00:02:17.298 CC lib/init/subsystem_rpc.o 00:02:17.298 CC lib/init/rpc.o 00:02:17.560 LIB libspdk_init.a 00:02:17.823 LIB libspdk_virtio.a 00:02:17.823 SO libspdk_init.so.6.0 00:02:17.823 LIB libspdk_vfu_tgt.a 00:02:17.823 SO libspdk_virtio.so.7.0 00:02:17.823 SO libspdk_vfu_tgt.so.3.0 00:02:17.823 SYMLINK libspdk_init.so 00:02:17.823 SYMLINK libspdk_virtio.so 00:02:17.823 SYMLINK libspdk_vfu_tgt.so 00:02:18.084 LIB libspdk_fsdev.a 00:02:18.084 SO libspdk_fsdev.so.2.0 00:02:18.084 SYMLINK libspdk_fsdev.so 00:02:18.084 CC lib/event/app.o 00:02:18.084 CC lib/event/reactor.o 00:02:18.084 CC lib/event/log_rpc.o 00:02:18.084 CC lib/event/app_rpc.o 00:02:18.084 CC lib/event/scheduler_static.o 00:02:18.345 LIB libspdk_accel.a 00:02:18.345 LIB libspdk_nvme.a 00:02:18.345 SO libspdk_accel.so.16.0 00:02:18.605 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:18.605 SO libspdk_nvme.so.15.0 00:02:18.605 SYMLINK libspdk_accel.so 00:02:18.605 LIB libspdk_event.a 00:02:18.605 SO libspdk_event.so.14.0 00:02:18.605 SYMLINK libspdk_event.so 00:02:18.865 SYMLINK libspdk_nvme.so 00:02:18.865 CC lib/bdev/bdev.o 00:02:18.865 CC lib/bdev/bdev_rpc.o 00:02:18.865 CC lib/bdev/bdev_zone.o 00:02:18.865 CC lib/bdev/part.o 00:02:18.865 CC lib/bdev/scsi_nvme.o 00:02:19.126 LIB libspdk_fuse_dispatcher.a 00:02:19.126 SO libspdk_fuse_dispatcher.so.1.0 00:02:19.126 SYMLINK libspdk_fuse_dispatcher.so 
00:02:20.069 LIB libspdk_blob.a 00:02:20.069 SO libspdk_blob.so.12.0 00:02:20.069 SYMLINK libspdk_blob.so 00:02:20.642 CC lib/lvol/lvol.o 00:02:20.642 CC lib/blobfs/blobfs.o 00:02:20.642 CC lib/blobfs/tree.o 00:02:21.215 LIB libspdk_bdev.a 00:02:21.215 LIB libspdk_blobfs.a 00:02:21.215 SO libspdk_bdev.so.17.0 00:02:21.215 SO libspdk_blobfs.so.11.0 00:02:21.477 LIB libspdk_lvol.a 00:02:21.477 SYMLINK libspdk_bdev.so 00:02:21.477 SO libspdk_lvol.so.11.0 00:02:21.477 SYMLINK libspdk_blobfs.so 00:02:21.477 SYMLINK libspdk_lvol.so 00:02:21.739 CC lib/ftl/ftl_core.o 00:02:21.739 CC lib/ftl/ftl_init.o 00:02:21.739 CC lib/ftl/ftl_layout.o 00:02:21.739 CC lib/ftl/ftl_io.o 00:02:21.739 CC lib/ftl/ftl_debug.o 00:02:21.739 CC lib/ftl/ftl_sb.o 00:02:21.739 CC lib/nbd/nbd.o 00:02:21.739 CC lib/nvmf/ctrlr.o 00:02:21.739 CC lib/ftl/ftl_l2p.o 00:02:21.739 CC lib/ftl/ftl_l2p_flat.o 00:02:21.739 CC lib/nbd/nbd_rpc.o 00:02:21.739 CC lib/nvmf/ctrlr_discovery.o 00:02:21.739 CC lib/ftl/ftl_nv_cache.o 00:02:21.739 CC lib/scsi/dev.o 00:02:21.739 CC lib/nvmf/ctrlr_bdev.o 00:02:21.739 CC lib/scsi/lun.o 00:02:21.739 CC lib/ftl/ftl_band.o 00:02:21.739 CC lib/ublk/ublk.o 00:02:21.739 CC lib/ftl/ftl_band_ops.o 00:02:21.739 CC lib/scsi/port.o 00:02:21.739 CC lib/nvmf/subsystem.o 00:02:21.739 CC lib/ublk/ublk_rpc.o 00:02:21.739 CC lib/scsi/scsi.o 00:02:21.739 CC lib/nvmf/nvmf.o 00:02:21.739 CC lib/ftl/ftl_writer.o 00:02:21.739 CC lib/nvmf/nvmf_rpc.o 00:02:21.739 CC lib/ftl/ftl_rq.o 00:02:21.739 CC lib/scsi/scsi_bdev.o 00:02:21.739 CC lib/scsi/scsi_pr.o 00:02:21.739 CC lib/ftl/ftl_reloc.o 00:02:21.739 CC lib/nvmf/transport.o 00:02:21.739 CC lib/ftl/ftl_l2p_cache.o 00:02:21.739 CC lib/scsi/scsi_rpc.o 00:02:21.739 CC lib/nvmf/tcp.o 00:02:21.739 CC lib/scsi/task.o 00:02:21.739 CC lib/ftl/ftl_p2l.o 00:02:21.739 CC lib/nvmf/stubs.o 00:02:21.739 CC lib/ftl/ftl_p2l_log.o 00:02:21.739 CC lib/nvmf/mdns_server.o 00:02:21.739 CC lib/nvmf/vfio_user.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt.o 00:02:21.739 CC 
lib/nvmf/rdma.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:21.739 CC lib/nvmf/auth.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:21.739 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:21.739 CC lib/ftl/utils/ftl_conf.o 00:02:21.739 CC lib/ftl/utils/ftl_md.o 00:02:21.739 CC lib/ftl/utils/ftl_mempool.o 00:02:21.739 CC lib/ftl/utils/ftl_bitmap.o 00:02:21.739 CC lib/ftl/utils/ftl_property.o 00:02:21.739 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:21.739 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:21.739 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:21.739 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:21.739 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:21.739 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:21.739 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:21.739 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:21.739 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:21.739 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:21.739 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:21.739 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:21.739 CC lib/ftl/base/ftl_base_bdev.o 00:02:21.739 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:21.739 CC lib/ftl/base/ftl_base_dev.o 00:02:21.739 CC lib/ftl/ftl_trace.o 00:02:22.420 LIB libspdk_nbd.a 00:02:22.420 SO libspdk_nbd.so.7.0 00:02:22.420 SYMLINK libspdk_nbd.so 00:02:22.420 LIB libspdk_scsi.a 00:02:22.782 SO libspdk_scsi.so.9.0 00:02:22.782 SYMLINK libspdk_scsi.so 00:02:22.782 LIB libspdk_ublk.a 00:02:22.782 SO libspdk_ublk.so.3.0 00:02:23.046 SYMLINK libspdk_ublk.so 00:02:23.046 LIB libspdk_ftl.a 00:02:23.046 CC lib/iscsi/conn.o 00:02:23.046 CC lib/iscsi/init_grp.o 
00:02:23.046 CC lib/iscsi/iscsi.o 00:02:23.046 CC lib/iscsi/param.o 00:02:23.046 CC lib/iscsi/portal_grp.o 00:02:23.046 CC lib/iscsi/tgt_node.o 00:02:23.046 CC lib/iscsi/iscsi_subsystem.o 00:02:23.046 CC lib/iscsi/iscsi_rpc.o 00:02:23.046 CC lib/vhost/vhost.o 00:02:23.046 CC lib/iscsi/task.o 00:02:23.046 CC lib/vhost/vhost_rpc.o 00:02:23.046 CC lib/vhost/vhost_scsi.o 00:02:23.046 CC lib/vhost/vhost_blk.o 00:02:23.046 CC lib/vhost/rte_vhost_user.o 00:02:23.046 SO libspdk_ftl.so.9.0 00:02:23.614 SYMLINK libspdk_ftl.so 00:02:23.875 LIB libspdk_nvmf.a 00:02:23.875 SO libspdk_nvmf.so.20.0 00:02:24.136 LIB libspdk_vhost.a 00:02:24.136 SO libspdk_vhost.so.8.0 00:02:24.136 SYMLINK libspdk_nvmf.so 00:02:24.136 SYMLINK libspdk_vhost.so 00:02:24.396 LIB libspdk_iscsi.a 00:02:24.396 SO libspdk_iscsi.so.8.0 00:02:24.396 SYMLINK libspdk_iscsi.so 00:02:24.967 CC module/env_dpdk/env_dpdk_rpc.o 00:02:24.967 CC module/vfu_device/vfu_virtio.o 00:02:24.967 CC module/vfu_device/vfu_virtio_blk.o 00:02:24.967 CC module/vfu_device/vfu_virtio_scsi.o 00:02:24.967 CC module/vfu_device/vfu_virtio_rpc.o 00:02:24.967 CC module/vfu_device/vfu_virtio_fs.o 00:02:25.227 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:25.227 LIB libspdk_env_dpdk_rpc.a 00:02:25.227 CC module/keyring/linux/keyring.o 00:02:25.227 CC module/blob/bdev/blob_bdev.o 00:02:25.227 CC module/keyring/linux/keyring_rpc.o 00:02:25.227 CC module/accel/iaa/accel_iaa.o 00:02:25.227 CC module/accel/iaa/accel_iaa_rpc.o 00:02:25.227 CC module/keyring/file/keyring.o 00:02:25.227 CC module/accel/error/accel_error.o 00:02:25.227 CC module/keyring/file/keyring_rpc.o 00:02:25.227 CC module/accel/error/accel_error_rpc.o 00:02:25.227 CC module/fsdev/aio/fsdev_aio.o 00:02:25.227 CC module/accel/ioat/accel_ioat.o 00:02:25.227 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:25.227 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:25.227 CC module/accel/ioat/accel_ioat_rpc.o 00:02:25.227 CC module/fsdev/aio/linux_aio_mgr.o 00:02:25.227 CC 
module/scheduler/gscheduler/gscheduler.o 00:02:25.227 CC module/accel/dsa/accel_dsa.o 00:02:25.227 CC module/sock/posix/posix.o 00:02:25.227 CC module/accel/dsa/accel_dsa_rpc.o 00:02:25.227 SO libspdk_env_dpdk_rpc.so.6.0 00:02:25.488 SYMLINK libspdk_env_dpdk_rpc.so 00:02:25.488 LIB libspdk_keyring_linux.a 00:02:25.488 LIB libspdk_keyring_file.a 00:02:25.488 LIB libspdk_scheduler_dpdk_governor.a 00:02:25.488 SO libspdk_keyring_linux.so.1.0 00:02:25.488 LIB libspdk_scheduler_dynamic.a 00:02:25.488 LIB libspdk_scheduler_gscheduler.a 00:02:25.488 SO libspdk_keyring_file.so.2.0 00:02:25.488 LIB libspdk_accel_ioat.a 00:02:25.488 LIB libspdk_accel_iaa.a 00:02:25.488 LIB libspdk_accel_error.a 00:02:25.488 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:25.488 SO libspdk_scheduler_dynamic.so.4.0 00:02:25.488 SO libspdk_scheduler_gscheduler.so.4.0 00:02:25.488 SO libspdk_accel_error.so.2.0 00:02:25.488 SO libspdk_accel_ioat.so.6.0 00:02:25.488 SO libspdk_accel_iaa.so.3.0 00:02:25.488 SYMLINK libspdk_keyring_linux.so 00:02:25.488 LIB libspdk_blob_bdev.a 00:02:25.488 SYMLINK libspdk_keyring_file.so 00:02:25.488 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:25.488 SO libspdk_blob_bdev.so.12.0 00:02:25.488 LIB libspdk_accel_dsa.a 00:02:25.488 SYMLINK libspdk_accel_error.so 00:02:25.770 SYMLINK libspdk_scheduler_dynamic.so 00:02:25.770 SYMLINK libspdk_scheduler_gscheduler.so 00:02:25.770 SYMLINK libspdk_accel_ioat.so 00:02:25.770 SYMLINK libspdk_accel_iaa.so 00:02:25.770 SO libspdk_accel_dsa.so.5.0 00:02:25.770 LIB libspdk_vfu_device.a 00:02:25.770 SYMLINK libspdk_blob_bdev.so 00:02:25.770 SO libspdk_vfu_device.so.3.0 00:02:25.770 SYMLINK libspdk_accel_dsa.so 00:02:25.770 SYMLINK libspdk_vfu_device.so 00:02:26.030 LIB libspdk_fsdev_aio.a 00:02:26.030 SO libspdk_fsdev_aio.so.1.0 00:02:26.030 LIB libspdk_sock_posix.a 00:02:26.030 SYMLINK libspdk_fsdev_aio.so 00:02:26.030 SO libspdk_sock_posix.so.6.0 00:02:26.030 SYMLINK libspdk_sock_posix.so 00:02:26.292 CC 
module/bdev/delay/vbdev_delay.o 00:02:26.292 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:26.292 CC module/bdev/error/vbdev_error.o 00:02:26.292 CC module/bdev/lvol/vbdev_lvol.o 00:02:26.292 CC module/bdev/error/vbdev_error_rpc.o 00:02:26.292 CC module/bdev/passthru/vbdev_passthru.o 00:02:26.292 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:26.292 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:26.292 CC module/blobfs/bdev/blobfs_bdev.o 00:02:26.292 CC module/bdev/gpt/gpt.o 00:02:26.292 CC module/bdev/null/bdev_null.o 00:02:26.292 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:26.292 CC module/bdev/null/bdev_null_rpc.o 00:02:26.292 CC module/bdev/gpt/vbdev_gpt.o 00:02:26.292 CC module/bdev/aio/bdev_aio.o 00:02:26.292 CC module/bdev/nvme/bdev_nvme.o 00:02:26.292 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:26.292 CC module/bdev/malloc/bdev_malloc.o 00:02:26.292 CC module/bdev/aio/bdev_aio_rpc.o 00:02:26.292 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:26.292 CC module/bdev/nvme/bdev_mdns_client.o 00:02:26.292 CC module/bdev/nvme/nvme_rpc.o 00:02:26.292 CC module/bdev/nvme/vbdev_opal.o 00:02:26.292 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:26.292 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:26.292 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:26.292 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:26.292 CC module/bdev/raid/bdev_raid.o 00:02:26.292 CC module/bdev/iscsi/bdev_iscsi.o 00:02:26.292 CC module/bdev/raid/bdev_raid_rpc.o 00:02:26.292 CC module/bdev/ftl/bdev_ftl.o 00:02:26.292 CC module/bdev/raid/bdev_raid_sb.o 00:02:26.292 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:26.292 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:26.292 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:26.292 CC module/bdev/raid/raid0.o 00:02:26.292 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:26.292 CC module/bdev/raid/raid1.o 00:02:26.292 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:26.292 CC module/bdev/raid/concat.o 00:02:26.292 CC module/bdev/split/vbdev_split.o 00:02:26.292 CC 
module/bdev/split/vbdev_split_rpc.o 00:02:26.552 LIB libspdk_blobfs_bdev.a 00:02:26.552 SO libspdk_blobfs_bdev.so.6.0 00:02:26.552 LIB libspdk_bdev_null.a 00:02:26.552 LIB libspdk_bdev_gpt.a 00:02:26.552 LIB libspdk_bdev_split.a 00:02:26.552 LIB libspdk_bdev_error.a 00:02:26.552 SO libspdk_bdev_gpt.so.6.0 00:02:26.552 LIB libspdk_bdev_passthru.a 00:02:26.552 SYMLINK libspdk_blobfs_bdev.so 00:02:26.552 SO libspdk_bdev_null.so.6.0 00:02:26.552 SO libspdk_bdev_error.so.6.0 00:02:26.552 SO libspdk_bdev_split.so.6.0 00:02:26.552 LIB libspdk_bdev_ftl.a 00:02:26.812 SO libspdk_bdev_passthru.so.6.0 00:02:26.812 SYMLINK libspdk_bdev_gpt.so 00:02:26.812 LIB libspdk_bdev_aio.a 00:02:26.812 LIB libspdk_bdev_zone_block.a 00:02:26.812 SYMLINK libspdk_bdev_error.so 00:02:26.812 SYMLINK libspdk_bdev_null.so 00:02:26.812 SO libspdk_bdev_ftl.so.6.0 00:02:26.812 SYMLINK libspdk_bdev_split.so 00:02:26.812 LIB libspdk_bdev_delay.a 00:02:26.812 LIB libspdk_bdev_malloc.a 00:02:26.812 LIB libspdk_bdev_iscsi.a 00:02:26.812 SO libspdk_bdev_zone_block.so.6.0 00:02:26.812 SO libspdk_bdev_aio.so.6.0 00:02:26.812 SYMLINK libspdk_bdev_passthru.so 00:02:26.812 SO libspdk_bdev_delay.so.6.0 00:02:26.812 SO libspdk_bdev_iscsi.so.6.0 00:02:26.812 SO libspdk_bdev_malloc.so.6.0 00:02:26.812 SYMLINK libspdk_bdev_ftl.so 00:02:26.812 LIB libspdk_bdev_lvol.a 00:02:26.812 SYMLINK libspdk_bdev_zone_block.so 00:02:26.812 SYMLINK libspdk_bdev_aio.so 00:02:26.812 SYMLINK libspdk_bdev_delay.so 00:02:26.812 SYMLINK libspdk_bdev_malloc.so 00:02:26.812 SYMLINK libspdk_bdev_iscsi.so 00:02:26.812 LIB libspdk_bdev_virtio.a 00:02:26.812 SO libspdk_bdev_lvol.so.6.0 00:02:26.812 SO libspdk_bdev_virtio.so.6.0 00:02:27.072 SYMLINK libspdk_bdev_lvol.so 00:02:27.072 SYMLINK libspdk_bdev_virtio.so 00:02:27.333 LIB libspdk_bdev_raid.a 00:02:27.333 SO libspdk_bdev_raid.so.6.0 00:02:27.333 SYMLINK libspdk_bdev_raid.so 00:02:28.714 LIB libspdk_bdev_nvme.a 00:02:28.714 SO libspdk_bdev_nvme.so.7.1 00:02:28.714 SYMLINK 
libspdk_bdev_nvme.so 00:02:29.653 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.653 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.653 CC module/event/subsystems/vmd/vmd.o 00:02:29.653 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.653 CC module/event/subsystems/keyring/keyring.o 00:02:29.653 CC module/event/subsystems/sock/sock.o 00:02:29.653 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.653 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.653 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:29.653 CC module/event/subsystems/fsdev/fsdev.o 00:02:29.653 LIB libspdk_event_scheduler.a 00:02:29.653 LIB libspdk_event_vfu_tgt.a 00:02:29.653 LIB libspdk_event_keyring.a 00:02:29.653 LIB libspdk_event_vhost_blk.a 00:02:29.653 LIB libspdk_event_vmd.a 00:02:29.653 LIB libspdk_event_iobuf.a 00:02:29.653 LIB libspdk_event_sock.a 00:02:29.653 LIB libspdk_event_fsdev.a 00:02:29.653 SO libspdk_event_scheduler.so.4.0 00:02:29.653 SO libspdk_event_keyring.so.1.0 00:02:29.653 SO libspdk_event_vfu_tgt.so.3.0 00:02:29.653 SO libspdk_event_vhost_blk.so.3.0 00:02:29.653 SO libspdk_event_sock.so.5.0 00:02:29.653 SO libspdk_event_iobuf.so.3.0 00:02:29.653 SO libspdk_event_vmd.so.6.0 00:02:29.653 SO libspdk_event_fsdev.so.1.0 00:02:29.913 SYMLINK libspdk_event_scheduler.so 00:02:29.913 SYMLINK libspdk_event_sock.so 00:02:29.913 SYMLINK libspdk_event_iobuf.so 00:02:29.913 SYMLINK libspdk_event_keyring.so 00:02:29.913 SYMLINK libspdk_event_vhost_blk.so 00:02:29.913 SYMLINK libspdk_event_vfu_tgt.so 00:02:29.913 SYMLINK libspdk_event_fsdev.so 00:02:29.913 SYMLINK libspdk_event_vmd.so 00:02:30.174 CC module/event/subsystems/accel/accel.o 00:02:30.434 LIB libspdk_event_accel.a 00:02:30.434 SO libspdk_event_accel.so.6.0 00:02:30.434 SYMLINK libspdk_event_accel.so 00:02:30.694 CC module/event/subsystems/bdev/bdev.o 00:02:30.955 LIB libspdk_event_bdev.a 00:02:30.955 SO libspdk_event_bdev.so.6.0 00:02:30.955 SYMLINK libspdk_event_bdev.so 00:02:31.526 CC 
module/event/subsystems/scsi/scsi.o 00:02:31.526 CC module/event/subsystems/nbd/nbd.o 00:02:31.526 CC module/event/subsystems/ublk/ublk.o 00:02:31.526 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.526 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.526 LIB libspdk_event_nbd.a 00:02:31.526 LIB libspdk_event_ublk.a 00:02:31.526 LIB libspdk_event_scsi.a 00:02:31.526 SO libspdk_event_ublk.so.3.0 00:02:31.526 SO libspdk_event_nbd.so.6.0 00:02:31.526 SO libspdk_event_scsi.so.6.0 00:02:31.788 LIB libspdk_event_nvmf.a 00:02:31.788 SYMLINK libspdk_event_ublk.so 00:02:31.788 SYMLINK libspdk_event_nbd.so 00:02:31.788 SYMLINK libspdk_event_scsi.so 00:02:31.788 SO libspdk_event_nvmf.so.6.0 00:02:31.788 SYMLINK libspdk_event_nvmf.so 00:02:32.050 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.050 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.311 LIB libspdk_event_vhost_scsi.a 00:02:32.311 LIB libspdk_event_iscsi.a 00:02:32.311 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.311 SO libspdk_event_iscsi.so.6.0 00:02:32.311 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.311 SYMLINK libspdk_event_iscsi.so 00:02:32.572 SO libspdk.so.6.0 00:02:32.572 SYMLINK libspdk.so 00:02:33.147 CC app/trace_record/trace_record.o 00:02:33.147 CXX app/trace/trace.o 00:02:33.148 CC app/spdk_nvme_discover/discovery_aer.o 00:02:33.148 CC app/spdk_nvme_identify/identify.o 00:02:33.148 CC app/spdk_lspci/spdk_lspci.o 00:02:33.148 CC app/spdk_nvme_perf/perf.o 00:02:33.148 CC test/rpc_client/rpc_client_test.o 00:02:33.148 TEST_HEADER include/spdk/accel.h 00:02:33.148 CC app/spdk_top/spdk_top.o 00:02:33.148 TEST_HEADER include/spdk/accel_module.h 00:02:33.148 TEST_HEADER include/spdk/assert.h 00:02:33.148 TEST_HEADER include/spdk/barrier.h 00:02:33.148 TEST_HEADER include/spdk/bdev.h 00:02:33.148 TEST_HEADER include/spdk/base64.h 00:02:33.148 TEST_HEADER include/spdk/bdev_module.h 00:02:33.148 TEST_HEADER include/spdk/bdev_zone.h 00:02:33.148 TEST_HEADER include/spdk/bit_array.h 
00:02:33.148 TEST_HEADER include/spdk/bit_pool.h 00:02:33.148 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:33.148 TEST_HEADER include/spdk/blob_bdev.h 00:02:33.148 TEST_HEADER include/spdk/blobfs.h 00:02:33.148 TEST_HEADER include/spdk/blob.h 00:02:33.148 TEST_HEADER include/spdk/config.h 00:02:33.148 TEST_HEADER include/spdk/conf.h 00:02:33.148 TEST_HEADER include/spdk/cpuset.h 00:02:33.148 TEST_HEADER include/spdk/crc16.h 00:02:33.148 TEST_HEADER include/spdk/crc32.h 00:02:33.148 TEST_HEADER include/spdk/crc64.h 00:02:33.148 TEST_HEADER include/spdk/dif.h 00:02:33.148 TEST_HEADER include/spdk/dma.h 00:02:33.148 TEST_HEADER include/spdk/endian.h 00:02:33.148 TEST_HEADER include/spdk/env_dpdk.h 00:02:33.148 TEST_HEADER include/spdk/env.h 00:02:33.148 TEST_HEADER include/spdk/event.h 00:02:33.148 TEST_HEADER include/spdk/fd_group.h 00:02:33.148 TEST_HEADER include/spdk/fd.h 00:02:33.148 TEST_HEADER include/spdk/file.h 00:02:33.148 TEST_HEADER include/spdk/fsdev.h 00:02:33.148 TEST_HEADER include/spdk/fsdev_module.h 00:02:33.148 CC app/nvmf_tgt/nvmf_main.o 00:02:33.148 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:33.148 TEST_HEADER include/spdk/ftl.h 00:02:33.148 CC app/iscsi_tgt/iscsi_tgt.o 00:02:33.148 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:33.148 TEST_HEADER include/spdk/gpt_spec.h 00:02:33.148 TEST_HEADER include/spdk/hexlify.h 00:02:33.148 CC app/spdk_dd/spdk_dd.o 00:02:33.148 TEST_HEADER include/spdk/idxd.h 00:02:33.148 TEST_HEADER include/spdk/histogram_data.h 00:02:33.148 TEST_HEADER include/spdk/idxd_spec.h 00:02:33.148 TEST_HEADER include/spdk/ioat.h 00:02:33.148 TEST_HEADER include/spdk/init.h 00:02:33.148 TEST_HEADER include/spdk/ioat_spec.h 00:02:33.148 TEST_HEADER include/spdk/iscsi_spec.h 00:02:33.148 TEST_HEADER include/spdk/json.h 00:02:33.148 TEST_HEADER include/spdk/jsonrpc.h 00:02:33.148 TEST_HEADER include/spdk/keyring.h 00:02:33.148 TEST_HEADER include/spdk/keyring_module.h 00:02:33.148 TEST_HEADER include/spdk/likely.h 00:02:33.148 
TEST_HEADER include/spdk/log.h 00:02:33.148 TEST_HEADER include/spdk/md5.h 00:02:33.148 TEST_HEADER include/spdk/lvol.h 00:02:33.148 TEST_HEADER include/spdk/memory.h 00:02:33.148 TEST_HEADER include/spdk/mmio.h 00:02:33.148 TEST_HEADER include/spdk/nbd.h 00:02:33.148 TEST_HEADER include/spdk/notify.h 00:02:33.148 TEST_HEADER include/spdk/net.h 00:02:33.148 CC app/spdk_tgt/spdk_tgt.o 00:02:33.148 TEST_HEADER include/spdk/nvme.h 00:02:33.148 TEST_HEADER include/spdk/nvme_intel.h 00:02:33.148 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:33.148 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:33.148 TEST_HEADER include/spdk/nvme_spec.h 00:02:33.148 TEST_HEADER include/spdk/nvme_zns.h 00:02:33.148 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:33.148 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:33.148 TEST_HEADER include/spdk/nvmf.h 00:02:33.148 TEST_HEADER include/spdk/nvmf_spec.h 00:02:33.148 TEST_HEADER include/spdk/nvmf_transport.h 00:02:33.148 TEST_HEADER include/spdk/opal.h 00:02:33.148 TEST_HEADER include/spdk/opal_spec.h 00:02:33.148 TEST_HEADER include/spdk/pci_ids.h 00:02:33.148 TEST_HEADER include/spdk/pipe.h 00:02:33.148 TEST_HEADER include/spdk/queue.h 00:02:33.148 TEST_HEADER include/spdk/reduce.h 00:02:33.148 TEST_HEADER include/spdk/rpc.h 00:02:33.148 TEST_HEADER include/spdk/scheduler.h 00:02:33.148 TEST_HEADER include/spdk/scsi.h 00:02:33.148 TEST_HEADER include/spdk/scsi_spec.h 00:02:33.148 TEST_HEADER include/spdk/sock.h 00:02:33.148 TEST_HEADER include/spdk/stdinc.h 00:02:33.148 TEST_HEADER include/spdk/string.h 00:02:33.148 TEST_HEADER include/spdk/thread.h 00:02:33.148 TEST_HEADER include/spdk/trace.h 00:02:33.148 TEST_HEADER include/spdk/tree.h 00:02:33.148 TEST_HEADER include/spdk/trace_parser.h 00:02:33.148 TEST_HEADER include/spdk/ublk.h 00:02:33.148 TEST_HEADER include/spdk/util.h 00:02:33.148 TEST_HEADER include/spdk/uuid.h 00:02:33.148 TEST_HEADER include/spdk/version.h 00:02:33.148 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:33.148 
TEST_HEADER include/spdk/vfio_user_spec.h 00:02:33.148 TEST_HEADER include/spdk/vhost.h 00:02:33.148 TEST_HEADER include/spdk/vmd.h 00:02:33.148 TEST_HEADER include/spdk/zipf.h 00:02:33.148 TEST_HEADER include/spdk/xor.h 00:02:33.148 CXX test/cpp_headers/accel.o 00:02:33.148 CXX test/cpp_headers/accel_module.o 00:02:33.148 CXX test/cpp_headers/assert.o 00:02:33.148 CXX test/cpp_headers/barrier.o 00:02:33.148 CXX test/cpp_headers/base64.o 00:02:33.148 CXX test/cpp_headers/bdev_module.o 00:02:33.148 CXX test/cpp_headers/bdev.o 00:02:33.148 CXX test/cpp_headers/bdev_zone.o 00:02:33.148 CXX test/cpp_headers/bit_array.o 00:02:33.148 CXX test/cpp_headers/bit_pool.o 00:02:33.148 CXX test/cpp_headers/blob_bdev.o 00:02:33.148 CXX test/cpp_headers/blobfs.o 00:02:33.148 CXX test/cpp_headers/blobfs_bdev.o 00:02:33.148 CXX test/cpp_headers/blob.o 00:02:33.148 CXX test/cpp_headers/conf.o 00:02:33.148 CXX test/cpp_headers/config.o 00:02:33.148 CXX test/cpp_headers/cpuset.o 00:02:33.148 CXX test/cpp_headers/crc64.o 00:02:33.148 CXX test/cpp_headers/crc32.o 00:02:33.148 CXX test/cpp_headers/crc16.o 00:02:33.148 CXX test/cpp_headers/dif.o 00:02:33.148 CXX test/cpp_headers/endian.o 00:02:33.148 CXX test/cpp_headers/dma.o 00:02:33.148 CXX test/cpp_headers/env_dpdk.o 00:02:33.148 CXX test/cpp_headers/env.o 00:02:33.148 CXX test/cpp_headers/event.o 00:02:33.148 CXX test/cpp_headers/fd.o 00:02:33.148 CXX test/cpp_headers/fd_group.o 00:02:33.148 CXX test/cpp_headers/file.o 00:02:33.148 CXX test/cpp_headers/fsdev.o 00:02:33.148 CXX test/cpp_headers/fsdev_module.o 00:02:33.148 CXX test/cpp_headers/fuse_dispatcher.o 00:02:33.148 CXX test/cpp_headers/ftl.o 00:02:33.148 CXX test/cpp_headers/gpt_spec.o 00:02:33.148 CXX test/cpp_headers/hexlify.o 00:02:33.148 CXX test/cpp_headers/histogram_data.o 00:02:33.148 CXX test/cpp_headers/idxd.o 00:02:33.148 CXX test/cpp_headers/idxd_spec.o 00:02:33.148 CXX test/cpp_headers/ioat.o 00:02:33.148 CXX test/cpp_headers/init.o 00:02:33.148 CXX 
test/cpp_headers/ioat_spec.o 00:02:33.148 CXX test/cpp_headers/iscsi_spec.o 00:02:33.148 CXX test/cpp_headers/json.o 00:02:33.148 CXX test/cpp_headers/jsonrpc.o 00:02:33.148 CXX test/cpp_headers/keyring_module.o 00:02:33.148 CXX test/cpp_headers/likely.o 00:02:33.148 CXX test/cpp_headers/keyring.o 00:02:33.148 CC examples/util/zipf/zipf.o 00:02:33.148 CXX test/cpp_headers/log.o 00:02:33.148 CXX test/cpp_headers/lvol.o 00:02:33.148 LINK spdk_lspci 00:02:33.148 CXX test/cpp_headers/net.o 00:02:33.148 CXX test/cpp_headers/memory.o 00:02:33.148 CXX test/cpp_headers/nbd.o 00:02:33.148 CXX test/cpp_headers/mmio.o 00:02:33.148 CXX test/cpp_headers/md5.o 00:02:33.148 CXX test/cpp_headers/notify.o 00:02:33.148 CXX test/cpp_headers/nvme.o 00:02:33.148 CXX test/cpp_headers/nvme_intel.o 00:02:33.148 CXX test/cpp_headers/nvme_ocssd.o 00:02:33.148 CXX test/cpp_headers/nvme_spec.o 00:02:33.148 CXX test/cpp_headers/nvmf_cmd.o 00:02:33.148 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:33.148 CC test/thread/poller_perf/poller_perf.o 00:02:33.148 CXX test/cpp_headers/nvme_zns.o 00:02:33.148 CXX test/cpp_headers/nvmf.o 00:02:33.148 CC examples/ioat/verify/verify.o 00:02:33.148 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:33.148 CXX test/cpp_headers/nvmf_spec.o 00:02:33.148 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:33.148 CXX test/cpp_headers/nvmf_transport.o 00:02:33.148 CXX test/cpp_headers/opal.o 00:02:33.148 CC test/env/vtophys/vtophys.o 00:02:33.148 CXX test/cpp_headers/pci_ids.o 00:02:33.148 CXX test/cpp_headers/opal_spec.o 00:02:33.148 CC test/env/memory/memory_ut.o 00:02:33.148 CXX test/cpp_headers/pipe.o 00:02:33.148 CXX test/cpp_headers/queue.o 00:02:33.148 CXX test/cpp_headers/reduce.o 00:02:33.148 CXX test/cpp_headers/rpc.o 00:02:33.148 CXX test/cpp_headers/scsi.o 00:02:33.148 CXX test/cpp_headers/scheduler.o 00:02:33.415 CXX test/cpp_headers/sock.o 00:02:33.415 CC test/app/jsoncat/jsoncat.o 00:02:33.415 CXX test/cpp_headers/scsi_spec.o 00:02:33.415 CXX 
test/cpp_headers/trace.o 00:02:33.415 CXX test/cpp_headers/stdinc.o 00:02:33.415 CC test/dma/test_dma/test_dma.o 00:02:33.415 CC examples/ioat/perf/perf.o 00:02:33.415 CXX test/cpp_headers/string.o 00:02:33.415 CC app/fio/nvme/fio_plugin.o 00:02:33.415 CXX test/cpp_headers/thread.o 00:02:33.415 CXX test/cpp_headers/ublk.o 00:02:33.415 CXX test/cpp_headers/trace_parser.o 00:02:33.415 CXX test/cpp_headers/tree.o 00:02:33.415 CXX test/cpp_headers/util.o 00:02:33.415 CC test/app/stub/stub.o 00:02:33.415 CXX test/cpp_headers/version.o 00:02:33.415 CXX test/cpp_headers/uuid.o 00:02:33.415 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.415 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.415 CXX test/cpp_headers/xor.o 00:02:33.415 CXX test/cpp_headers/zipf.o 00:02:33.415 CXX test/cpp_headers/vhost.o 00:02:33.415 CXX test/cpp_headers/vmd.o 00:02:33.415 CC test/env/pci/pci_ut.o 00:02:33.415 CC test/app/histogram_perf/histogram_perf.o 00:02:33.415 LINK rpc_client_test 00:02:33.415 CC app/fio/bdev/fio_plugin.o 00:02:33.415 LINK spdk_nvme_discover 00:02:33.415 CC test/app/bdev_svc/bdev_svc.o 00:02:33.415 LINK nvmf_tgt 00:02:33.415 LINK interrupt_tgt 00:02:33.677 LINK iscsi_tgt 00:02:33.678 LINK spdk_trace_record 00:02:33.941 LINK spdk_tgt 00:02:33.941 CC test/env/mem_callbacks/mem_callbacks.o 00:02:33.941 LINK poller_perf 00:02:33.941 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:33.941 LINK histogram_perf 00:02:33.941 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.941 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:33.941 LINK spdk_dd 00:02:33.941 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:34.202 LINK bdev_svc 00:02:34.202 LINK zipf 00:02:34.202 LINK vtophys 00:02:34.202 LINK jsoncat 00:02:34.202 LINK spdk_trace 00:02:34.202 LINK env_dpdk_post_init 00:02:34.462 LINK stub 00:02:34.462 LINK ioat_perf 00:02:34.462 LINK verify 00:02:34.462 LINK spdk_nvme 00:02:34.722 LINK pci_ut 00:02:34.722 LINK nvme_fuzz 00:02:34.722 CC test/event/event_perf/event_perf.o 00:02:34.722 CC 
test/event/reactor/reactor.o 00:02:34.722 CC test/event/reactor_perf/reactor_perf.o 00:02:34.722 CC examples/idxd/perf/perf.o 00:02:34.722 CC examples/sock/hello_world/hello_sock.o 00:02:34.722 CC examples/vmd/lsvmd/lsvmd.o 00:02:34.722 CC examples/vmd/led/led.o 00:02:34.722 LINK spdk_top 00:02:34.722 CC test/event/app_repeat/app_repeat.o 00:02:34.722 CC test/event/scheduler/scheduler.o 00:02:34.722 LINK spdk_nvme_perf 00:02:34.722 LINK spdk_bdev 00:02:34.722 CC examples/thread/thread/thread_ex.o 00:02:34.722 CC app/vhost/vhost.o 00:02:34.722 LINK test_dma 00:02:34.722 LINK vhost_fuzz 00:02:34.983 LINK mem_callbacks 00:02:34.983 LINK spdk_nvme_identify 00:02:34.983 LINK reactor 00:02:34.983 LINK event_perf 00:02:34.983 LINK reactor_perf 00:02:34.983 LINK lsvmd 00:02:34.983 LINK led 00:02:34.983 LINK app_repeat 00:02:34.983 LINK hello_sock 00:02:34.983 LINK vhost 00:02:34.983 LINK scheduler 00:02:34.983 LINK thread 00:02:34.983 LINK idxd_perf 00:02:35.553 LINK memory_ut 00:02:35.553 CC test/nvme/overhead/overhead.o 00:02:35.553 CC test/nvme/reset/reset.o 00:02:35.553 CC test/nvme/sgl/sgl.o 00:02:35.553 CC test/nvme/err_injection/err_injection.o 00:02:35.553 CC test/nvme/startup/startup.o 00:02:35.553 CC test/nvme/aer/aer.o 00:02:35.553 CC test/nvme/simple_copy/simple_copy.o 00:02:35.553 CC test/nvme/fused_ordering/fused_ordering.o 00:02:35.553 CC test/nvme/e2edp/nvme_dp.o 00:02:35.553 CC test/nvme/connect_stress/connect_stress.o 00:02:35.553 CC test/nvme/reserve/reserve.o 00:02:35.553 CC test/nvme/fdp/fdp.o 00:02:35.553 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:35.553 CC test/nvme/boot_partition/boot_partition.o 00:02:35.553 CC test/nvme/compliance/nvme_compliance.o 00:02:35.553 CC test/accel/dif/dif.o 00:02:35.553 CC test/nvme/cuse/cuse.o 00:02:35.553 CC test/blobfs/mkfs/mkfs.o 00:02:35.553 CC examples/nvme/hotplug/hotplug.o 00:02:35.553 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:35.553 CC examples/nvme/hello_world/hello_world.o 00:02:35.553 CC 
examples/nvme/arbitration/arbitration.o 00:02:35.553 CC examples/nvme/reconnect/reconnect.o 00:02:35.553 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:35.553 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:35.553 CC test/lvol/esnap/esnap.o 00:02:35.553 CC examples/nvme/abort/abort.o 00:02:35.814 LINK startup 00:02:35.814 LINK boot_partition 00:02:35.814 LINK err_injection 00:02:35.814 LINK doorbell_aers 00:02:35.814 LINK connect_stress 00:02:35.814 LINK fused_ordering 00:02:35.814 CC examples/accel/perf/accel_perf.o 00:02:35.814 LINK reserve 00:02:35.814 LINK mkfs 00:02:35.814 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:35.814 CC examples/blob/hello_world/hello_blob.o 00:02:35.814 LINK simple_copy 00:02:35.814 LINK sgl 00:02:35.814 CC examples/blob/cli/blobcli.o 00:02:35.814 LINK reset 00:02:35.814 LINK nvme_dp 00:02:35.814 LINK overhead 00:02:35.814 LINK aer 00:02:35.814 LINK nvme_compliance 00:02:35.814 LINK fdp 00:02:35.814 LINK pmr_persistence 00:02:35.814 LINK cmb_copy 00:02:35.814 LINK hello_world 00:02:35.814 LINK hotplug 00:02:35.814 LINK iscsi_fuzz 00:02:36.076 LINK arbitration 00:02:36.076 LINK abort 00:02:36.076 LINK reconnect 00:02:36.076 LINK hello_blob 00:02:36.076 LINK hello_fsdev 00:02:36.076 LINK nvme_manage 00:02:36.076 LINK dif 00:02:36.336 LINK accel_perf 00:02:36.336 LINK blobcli 00:02:36.909 LINK cuse 00:02:36.909 CC test/bdev/bdevio/bdevio.o 00:02:36.909 CC examples/bdev/hello_world/hello_bdev.o 00:02:36.909 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.170 LINK hello_bdev 00:02:37.170 LINK bdevio 00:02:37.742 LINK bdevperf 00:02:38.314 CC examples/nvmf/nvmf/nvmf.o 00:02:38.575 LINK nvmf 00:02:40.490 LINK esnap 00:02:40.490 00:02:40.490 real 0m57.056s 00:02:40.490 user 8m9.605s 00:02:40.490 sys 5m36.415s 00:02:40.490 13:10:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:40.490 13:10:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:40.490 ************************************ 00:02:40.490 END TEST make 
00:02:40.490 ************************************ 00:02:40.490 13:10:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:40.490 13:10:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:40.490 13:10:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:40.490 13:10:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.490 13:10:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:40.490 13:10:27 -- pm/common@44 -- $ pid=1820949 00:02:40.490 13:10:27 -- pm/common@50 -- $ kill -TERM 1820949 00:02:40.490 13:10:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.490 13:10:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:40.490 13:10:27 -- pm/common@44 -- $ pid=1820950 00:02:40.490 13:10:27 -- pm/common@50 -- $ kill -TERM 1820950 00:02:40.490 13:10:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.490 13:10:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:40.490 13:10:27 -- pm/common@44 -- $ pid=1820952 00:02:40.490 13:10:27 -- pm/common@50 -- $ kill -TERM 1820952 00:02:40.490 13:10:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.490 13:10:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:40.490 13:10:27 -- pm/common@44 -- $ pid=1820975 00:02:40.490 13:10:27 -- pm/common@50 -- $ sudo -E kill -TERM 1820975 00:02:40.490 13:10:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:40.490 13:10:27 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:40.751 13:10:27 -- common/autotest_common.sh@1710 -- # [[ y 
== y ]] 00:02:40.751 13:10:27 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:40.751 13:10:27 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:40.751 13:10:27 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:40.751 13:10:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:40.751 13:10:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:40.751 13:10:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:40.751 13:10:27 -- scripts/common.sh@336 -- # IFS=.-: 00:02:40.751 13:10:27 -- scripts/common.sh@336 -- # read -ra ver1 00:02:40.752 13:10:27 -- scripts/common.sh@337 -- # IFS=.-: 00:02:40.752 13:10:27 -- scripts/common.sh@337 -- # read -ra ver2 00:02:40.752 13:10:27 -- scripts/common.sh@338 -- # local 'op=<' 00:02:40.752 13:10:27 -- scripts/common.sh@340 -- # ver1_l=2 00:02:40.752 13:10:27 -- scripts/common.sh@341 -- # ver2_l=1 00:02:40.752 13:10:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:40.752 13:10:27 -- scripts/common.sh@344 -- # case "$op" in 00:02:40.752 13:10:27 -- scripts/common.sh@345 -- # : 1 00:02:40.752 13:10:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:40.752 13:10:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:40.752 13:10:27 -- scripts/common.sh@365 -- # decimal 1 00:02:40.752 13:10:27 -- scripts/common.sh@353 -- # local d=1 00:02:40.752 13:10:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:40.752 13:10:27 -- scripts/common.sh@355 -- # echo 1 00:02:40.752 13:10:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:40.752 13:10:27 -- scripts/common.sh@366 -- # decimal 2 00:02:40.752 13:10:27 -- scripts/common.sh@353 -- # local d=2 00:02:40.752 13:10:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:40.752 13:10:27 -- scripts/common.sh@355 -- # echo 2 00:02:40.752 13:10:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:40.752 13:10:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:40.752 13:10:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:40.752 13:10:27 -- scripts/common.sh@368 -- # return 0 00:02:40.752 13:10:27 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:40.752 13:10:27 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:40.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.752 --rc genhtml_branch_coverage=1 00:02:40.752 --rc genhtml_function_coverage=1 00:02:40.752 --rc genhtml_legend=1 00:02:40.752 --rc geninfo_all_blocks=1 00:02:40.752 --rc geninfo_unexecuted_blocks=1 00:02:40.752 00:02:40.752 ' 00:02:40.752 13:10:27 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:40.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.752 --rc genhtml_branch_coverage=1 00:02:40.752 --rc genhtml_function_coverage=1 00:02:40.752 --rc genhtml_legend=1 00:02:40.752 --rc geninfo_all_blocks=1 00:02:40.752 --rc geninfo_unexecuted_blocks=1 00:02:40.752 00:02:40.752 ' 00:02:40.752 13:10:27 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:40.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.752 --rc genhtml_branch_coverage=1 00:02:40.752 --rc 
genhtml_function_coverage=1 00:02:40.752 --rc genhtml_legend=1 00:02:40.752 --rc geninfo_all_blocks=1 00:02:40.752 --rc geninfo_unexecuted_blocks=1 00:02:40.752 00:02:40.752 ' 00:02:40.752 13:10:27 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:40.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:40.752 --rc genhtml_branch_coverage=1 00:02:40.752 --rc genhtml_function_coverage=1 00:02:40.752 --rc genhtml_legend=1 00:02:40.752 --rc geninfo_all_blocks=1 00:02:40.752 --rc geninfo_unexecuted_blocks=1 00:02:40.752 00:02:40.752 ' 00:02:40.752 13:10:27 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:40.752 13:10:27 -- nvmf/common.sh@7 -- # uname -s 00:02:40.752 13:10:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:40.752 13:10:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:40.752 13:10:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:40.752 13:10:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:40.752 13:10:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:40.752 13:10:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:40.752 13:10:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:40.752 13:10:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:40.752 13:10:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:40.752 13:10:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:40.752 13:10:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:40.752 13:10:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:40.752 13:10:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:40.752 13:10:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:40.752 13:10:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:40.752 13:10:27 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:40.752 13:10:27 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:40.752 13:10:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:40.752 13:10:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:40.752 13:10:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.752 13:10:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.752 13:10:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.752 13:10:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.752 13:10:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.752 13:10:27 -- paths/export.sh@5 -- # export PATH 00:02:40.752 13:10:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.752 13:10:27 -- nvmf/common.sh@51 -- # : 0 00:02:40.752 13:10:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:40.752 13:10:27 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:40.752 13:10:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:40.752 13:10:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:40.752 13:10:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:40.752 13:10:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:40.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:40.752 13:10:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:40.752 13:10:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:40.752 13:10:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:40.752 13:10:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:40.752 13:10:27 -- spdk/autotest.sh@32 -- # uname -s 00:02:40.752 13:10:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:40.752 13:10:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:40.752 13:10:27 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:40.752 13:10:27 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:40.752 13:10:27 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:40.752 13:10:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:40.752 13:10:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:40.752 13:10:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:40.752 13:10:27 -- spdk/autotest.sh@48 -- # udevadm_pid=1887108 00:02:40.752 13:10:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:40.753 13:10:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:40.753 13:10:27 -- pm/common@17 -- # local monitor 00:02:40.753 13:10:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.753 13:10:27 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:40.753 13:10:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.753 13:10:27 -- pm/common@21 -- # date +%s 00:02:40.753 13:10:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.753 13:10:27 -- pm/common@21 -- # date +%s 00:02:40.753 13:10:27 -- pm/common@25 -- # sleep 1 00:02:40.753 13:10:27 -- pm/common@21 -- # date +%s 00:02:40.753 13:10:27 -- pm/common@21 -- # date +%s 00:02:40.753 13:10:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733487027 00:02:40.753 13:10:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733487027 00:02:40.753 13:10:27 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733487027 00:02:40.753 13:10:27 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733487027 00:02:41.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733487027_collect-cpu-load.pm.log 00:02:41.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733487027_collect-vmstat.pm.log 00:02:41.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733487027_collect-cpu-temp.pm.log 00:02:41.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733487027_collect-bmc-pm.bmc.pm.log 00:02:41.957 
13:10:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:41.957 13:10:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:41.957 13:10:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:41.957 13:10:28 -- common/autotest_common.sh@10 -- # set +x 00:02:41.957 13:10:28 -- spdk/autotest.sh@59 -- # create_test_list 00:02:41.957 13:10:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:41.957 13:10:28 -- common/autotest_common.sh@10 -- # set +x 00:02:41.957 13:10:28 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:41.957 13:10:28 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.957 13:10:28 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.957 13:10:28 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:41.957 13:10:28 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:41.957 13:10:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:41.957 13:10:28 -- common/autotest_common.sh@1457 -- # uname 00:02:41.957 13:10:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:41.957 13:10:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:41.957 13:10:28 -- common/autotest_common.sh@1477 -- # uname 00:02:41.957 13:10:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:41.957 13:10:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:41.957 13:10:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:41.957 lcov: LCOV version 1.15 00:02:41.957 13:10:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.862 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:14.976 13:10:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:14.976 13:10:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.976 13:10:59 -- common/autotest_common.sh@10 -- # set +x 00:03:14.976 13:10:59 -- spdk/autotest.sh@78 -- # rm -f 00:03:14.976 13:10:59 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.917 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:16.178 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:16.178 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:16.439 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:16.439 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:16.439 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:16.439 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:16.439 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:16.439 13:11:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:16.439 13:11:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:16.439 13:11:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:16.439 13:11:02 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:16.439 13:11:02 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:16.439 13:11:02 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:16.439 13:11:02 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:16.439 13:11:02 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:16.439 13:11:02 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:16.439 13:11:02 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:16.439 13:11:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:16.439 13:11:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.439 13:11:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:16.439 13:11:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:16.439 13:11:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:16.439 13:11:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:16.439 13:11:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:16.439 13:11:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:16.439 13:11:02 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:16.439 No valid GPT data, bailing 00:03:16.439 13:11:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:16.439 13:11:03 -- scripts/common.sh@394 -- # pt= 00:03:16.439 13:11:03 -- scripts/common.sh@395 -- 
# return 1 00:03:16.439 13:11:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:16.439 1+0 records in 00:03:16.439 1+0 records out 00:03:16.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509406 s, 206 MB/s 00:03:16.439 13:11:03 -- spdk/autotest.sh@105 -- # sync 00:03:16.439 13:11:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:16.439 13:11:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:16.439 13:11:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:26.438 13:11:11 -- spdk/autotest.sh@111 -- # uname -s 00:03:26.438 13:11:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:26.438 13:11:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:26.438 13:11:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:28.353 Hugepages 00:03:28.353 node hugesize free / total 00:03:28.353 node0 1048576kB 0 / 0 00:03:28.353 node0 2048kB 0 / 0 00:03:28.353 node1 1048576kB 0 / 0 00:03:28.353 node1 2048kB 0 / 0 00:03:28.353 00:03:28.353 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.614 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:28.614 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:28.614 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:28.614 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.5 8086 
0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:28.614 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:28.614 13:11:15 -- spdk/autotest.sh@117 -- # uname -s 00:03:28.614 13:11:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:28.614 13:11:15 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:28.614 13:11:15 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.823 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:32.823 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:34.213 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:34.213 13:11:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:35.294 13:11:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:35.294 13:11:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:35.294 13:11:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:35.294 13:11:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:35.294 13:11:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:35.294 13:11:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:35.294 13:11:21 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:35.294 13:11:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:35.294 13:11:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:35.294 13:11:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:35.294 13:11:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:35.294 13:11:21 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.594 Waiting for block devices as requested 00:03:38.594 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:38.594 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:38.854 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:38.854 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:38.854 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:39.114 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:39.114 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:39.114 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:39.374 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:39.374 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:39.374 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:39.634 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:39.634 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:39.634 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:39.895 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:39.895 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:39.895 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:40.156 13:11:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:40.156 13:11:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:40.156 13:11:26 -- 
common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:40.156 13:11:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:40.156 13:11:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:40.156 13:11:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:40.156 13:11:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:40.156 13:11:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:40.156 13:11:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:40.156 13:11:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:40.156 13:11:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:40.156 13:11:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:40.156 13:11:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:40.156 13:11:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:40.156 13:11:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:40.156 13:11:26 -- common/autotest_common.sh@1543 -- # continue 00:03:40.156 13:11:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:40.156 13:11:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.156 13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:03:40.156 13:11:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:40.156 13:11:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.156 
13:11:26 -- common/autotest_common.sh@10 -- # set +x 00:03:40.156 13:11:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.457 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.457 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.457 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.457 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.457 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.717 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:43.717 13:11:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:43.717 13:11:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:43.717 13:11:30 -- common/autotest_common.sh@10 -- # set +x 00:03:43.978 13:11:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:43.978 13:11:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:43.978 13:11:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:43.978 13:11:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:43.978 13:11:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:43.978 13:11:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:43.978 13:11:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:43.978 13:11:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:43.978 13:11:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:43.978 13:11:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:43.978 13:11:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.978 13:11:30 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.978 13:11:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:43.978 13:11:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:43.978 13:11:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:43.978 13:11:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:43.978 13:11:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:43.978 13:11:30 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:43.978 13:11:30 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:43.978 13:11:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:43.978 13:11:30 -- common/autotest_common.sh@1572 -- # return 0 00:03:43.978 13:11:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:43.978 13:11:30 -- common/autotest_common.sh@1580 -- # return 0 00:03:43.978 13:11:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:43.978 13:11:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:43.978 13:11:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:43.978 13:11:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:43.978 13:11:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:43.978 13:11:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.978 13:11:30 -- common/autotest_common.sh@10 -- # set +x 00:03:43.978 13:11:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:43.978 13:11:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:43.978 13:11:30 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.978 13:11:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.978 13:11:30 -- common/autotest_common.sh@10 -- # set +x 00:03:43.978 ************************************ 00:03:43.978 START TEST env 00:03:43.978 ************************************ 00:03:43.978 13:11:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.239 * Looking for test storage... 00:03:44.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.239 13:11:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.239 13:11:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.239 13:11:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.239 13:11:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.239 13:11:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.239 13:11:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.239 13:11:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.239 13:11:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.239 13:11:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.239 13:11:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.239 13:11:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.239 13:11:30 env -- scripts/common.sh@344 -- # case "$op" in 00:03:44.239 13:11:30 env -- scripts/common.sh@345 -- # : 1 00:03:44.239 13:11:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.239 13:11:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.239 13:11:30 env -- scripts/common.sh@365 -- # decimal 1 00:03:44.239 13:11:30 env -- scripts/common.sh@353 -- # local d=1 00:03:44.239 13:11:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.239 13:11:30 env -- scripts/common.sh@355 -- # echo 1 00:03:44.239 13:11:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.239 13:11:30 env -- scripts/common.sh@366 -- # decimal 2 00:03:44.239 13:11:30 env -- scripts/common.sh@353 -- # local d=2 00:03:44.239 13:11:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.239 13:11:30 env -- scripts/common.sh@355 -- # echo 2 00:03:44.239 13:11:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.239 13:11:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.239 13:11:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.239 13:11:30 env -- scripts/common.sh@368 -- # return 0 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.239 --rc genhtml_branch_coverage=1 00:03:44.239 --rc genhtml_function_coverage=1 00:03:44.239 --rc genhtml_legend=1 00:03:44.239 --rc geninfo_all_blocks=1 00:03:44.239 --rc geninfo_unexecuted_blocks=1 00:03:44.239 00:03:44.239 ' 00:03:44.239 13:11:30 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.239 --rc genhtml_branch_coverage=1 00:03:44.239 --rc genhtml_function_coverage=1 00:03:44.239 --rc genhtml_legend=1 00:03:44.239 --rc geninfo_all_blocks=1 00:03:44.240 --rc geninfo_unexecuted_blocks=1 00:03:44.240 00:03:44.240 ' 00:03:44.240 13:11:30 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:44.240 --rc genhtml_branch_coverage=1 00:03:44.240 --rc genhtml_function_coverage=1 00:03:44.240 --rc genhtml_legend=1 00:03:44.240 --rc geninfo_all_blocks=1 00:03:44.240 --rc geninfo_unexecuted_blocks=1 00:03:44.240 00:03:44.240 ' 00:03:44.240 13:11:30 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.240 --rc genhtml_branch_coverage=1 00:03:44.240 --rc genhtml_function_coverage=1 00:03:44.240 --rc genhtml_legend=1 00:03:44.240 --rc geninfo_all_blocks=1 00:03:44.240 --rc geninfo_unexecuted_blocks=1 00:03:44.240 00:03:44.240 ' 00:03:44.240 13:11:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.240 13:11:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.240 13:11:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.240 13:11:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.240 ************************************ 00:03:44.240 START TEST env_memory 00:03:44.240 ************************************ 00:03:44.240 13:11:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.240 00:03:44.240 00:03:44.240 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.240 http://cunit.sourceforge.net/ 00:03:44.240 00:03:44.240 00:03:44.240 Suite: memory 00:03:44.240 Test: alloc and free memory map ...[2024-12-06 13:11:30.839410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.240 passed 00:03:44.240 Test: mem map translation ...[2024-12-06 13:11:30.865024] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.240 [2024-12-06 
13:11:30.865054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.240 [2024-12-06 13:11:30.865102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.240 [2024-12-06 13:11:30.865114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.502 passed 00:03:44.502 Test: mem map registration ...[2024-12-06 13:11:30.920384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:44.502 [2024-12-06 13:11:30.920409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:44.502 passed 00:03:44.502 Test: mem map adjacent registrations ...passed 00:03:44.502 00:03:44.502 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.502 suites 1 1 n/a 0 0 00:03:44.502 tests 4 4 4 0 0 00:03:44.502 asserts 152 152 152 0 n/a 00:03:44.502 00:03:44.502 Elapsed time = 0.192 seconds 00:03:44.502 00:03:44.502 real 0m0.209s 00:03:44.502 user 0m0.194s 00:03:44.502 sys 0m0.013s 00:03:44.502 13:11:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.502 13:11:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:44.502 ************************************ 00:03:44.502 END TEST env_memory 00:03:44.502 ************************************ 00:03:44.502 13:11:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.502 13:11:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:44.502 13:11:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.502 13:11:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.502 ************************************ 00:03:44.502 START TEST env_vtophys 00:03:44.502 ************************************ 00:03:44.502 13:11:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.502 EAL: lib.eal log level changed from notice to debug 00:03:44.502 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.502 EAL: Detected lcore 1 as core 1 on socket 0 00:03:44.502 EAL: Detected lcore 2 as core 2 on socket 0 00:03:44.502 EAL: Detected lcore 3 as core 3 on socket 0 00:03:44.502 EAL: Detected lcore 4 as core 4 on socket 0 00:03:44.502 EAL: Detected lcore 5 as core 5 on socket 0 00:03:44.502 EAL: Detected lcore 6 as core 6 on socket 0 00:03:44.502 EAL: Detected lcore 7 as core 7 on socket 0 00:03:44.502 EAL: Detected lcore 8 as core 8 on socket 0 00:03:44.502 EAL: Detected lcore 9 as core 9 on socket 0 00:03:44.502 EAL: Detected lcore 10 as core 10 on socket 0 00:03:44.502 EAL: Detected lcore 11 as core 11 on socket 0 00:03:44.502 EAL: Detected lcore 12 as core 12 on socket 0 00:03:44.502 EAL: Detected lcore 13 as core 13 on socket 0 00:03:44.502 EAL: Detected lcore 14 as core 14 on socket 0 00:03:44.502 EAL: Detected lcore 15 as core 15 on socket 0 00:03:44.502 EAL: Detected lcore 16 as core 16 on socket 0 00:03:44.502 EAL: Detected lcore 17 as core 17 on socket 0 00:03:44.502 EAL: Detected lcore 18 as core 18 on socket 0 00:03:44.502 EAL: Detected lcore 19 as core 19 on socket 0 00:03:44.502 EAL: Detected lcore 20 as core 20 on socket 0 00:03:44.502 EAL: Detected lcore 21 as core 21 on socket 0 00:03:44.502 EAL: Detected lcore 22 as core 22 on socket 0 00:03:44.502 EAL: Detected lcore 23 as core 23 on socket 0 00:03:44.502 EAL: Detected lcore 24 as core 24 on socket 0 00:03:44.502 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:44.502 EAL: Detected lcore 26 as core 26 on socket 0 00:03:44.502 EAL: Detected lcore 27 as core 27 on socket 0 00:03:44.502 EAL: Detected lcore 28 as core 28 on socket 0 00:03:44.502 EAL: Detected lcore 29 as core 29 on socket 0 00:03:44.502 EAL: Detected lcore 30 as core 30 on socket 0 00:03:44.502 EAL: Detected lcore 31 as core 31 on socket 0 00:03:44.502 EAL: Detected lcore 32 as core 32 on socket 0 00:03:44.502 EAL: Detected lcore 33 as core 33 on socket 0 00:03:44.502 EAL: Detected lcore 34 as core 34 on socket 0 00:03:44.502 EAL: Detected lcore 35 as core 35 on socket 0 00:03:44.502 EAL: Detected lcore 36 as core 0 on socket 1 00:03:44.502 EAL: Detected lcore 37 as core 1 on socket 1 00:03:44.502 EAL: Detected lcore 38 as core 2 on socket 1 00:03:44.502 EAL: Detected lcore 39 as core 3 on socket 1 00:03:44.502 EAL: Detected lcore 40 as core 4 on socket 1 00:03:44.502 EAL: Detected lcore 41 as core 5 on socket 1 00:03:44.502 EAL: Detected lcore 42 as core 6 on socket 1 00:03:44.502 EAL: Detected lcore 43 as core 7 on socket 1 00:03:44.502 EAL: Detected lcore 44 as core 8 on socket 1 00:03:44.502 EAL: Detected lcore 45 as core 9 on socket 1 00:03:44.502 EAL: Detected lcore 46 as core 10 on socket 1 00:03:44.502 EAL: Detected lcore 47 as core 11 on socket 1 00:03:44.502 EAL: Detected lcore 48 as core 12 on socket 1 00:03:44.502 EAL: Detected lcore 49 as core 13 on socket 1 00:03:44.502 EAL: Detected lcore 50 as core 14 on socket 1 00:03:44.502 EAL: Detected lcore 51 as core 15 on socket 1 00:03:44.502 EAL: Detected lcore 52 as core 16 on socket 1 00:03:44.502 EAL: Detected lcore 53 as core 17 on socket 1 00:03:44.502 EAL: Detected lcore 54 as core 18 on socket 1 00:03:44.502 EAL: Detected lcore 55 as core 19 on socket 1 00:03:44.502 EAL: Detected lcore 56 as core 20 on socket 1 00:03:44.502 EAL: Detected lcore 57 as core 21 on socket 1 00:03:44.502 EAL: Detected lcore 58 as core 22 on socket 1 00:03:44.502 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:44.502 EAL: Detected lcore 60 as core 24 on socket 1 00:03:44.502 EAL: Detected lcore 61 as core 25 on socket 1 00:03:44.502 EAL: Detected lcore 62 as core 26 on socket 1 00:03:44.502 EAL: Detected lcore 63 as core 27 on socket 1 00:03:44.502 EAL: Detected lcore 64 as core 28 on socket 1 00:03:44.502 EAL: Detected lcore 65 as core 29 on socket 1 00:03:44.502 EAL: Detected lcore 66 as core 30 on socket 1 00:03:44.502 EAL: Detected lcore 67 as core 31 on socket 1 00:03:44.502 EAL: Detected lcore 68 as core 32 on socket 1 00:03:44.502 EAL: Detected lcore 69 as core 33 on socket 1 00:03:44.502 EAL: Detected lcore 70 as core 34 on socket 1 00:03:44.502 EAL: Detected lcore 71 as core 35 on socket 1 00:03:44.502 EAL: Detected lcore 72 as core 0 on socket 0 00:03:44.502 EAL: Detected lcore 73 as core 1 on socket 0 00:03:44.502 EAL: Detected lcore 74 as core 2 on socket 0 00:03:44.502 EAL: Detected lcore 75 as core 3 on socket 0 00:03:44.502 EAL: Detected lcore 76 as core 4 on socket 0 00:03:44.502 EAL: Detected lcore 77 as core 5 on socket 0 00:03:44.502 EAL: Detected lcore 78 as core 6 on socket 0 00:03:44.502 EAL: Detected lcore 79 as core 7 on socket 0 00:03:44.502 EAL: Detected lcore 80 as core 8 on socket 0 00:03:44.502 EAL: Detected lcore 81 as core 9 on socket 0 00:03:44.502 EAL: Detected lcore 82 as core 10 on socket 0 00:03:44.502 EAL: Detected lcore 83 as core 11 on socket 0 00:03:44.502 EAL: Detected lcore 84 as core 12 on socket 0 00:03:44.502 EAL: Detected lcore 85 as core 13 on socket 0 00:03:44.502 EAL: Detected lcore 86 as core 14 on socket 0 00:03:44.503 EAL: Detected lcore 87 as core 15 on socket 0 00:03:44.503 EAL: Detected lcore 88 as core 16 on socket 0 00:03:44.503 EAL: Detected lcore 89 as core 17 on socket 0 00:03:44.503 EAL: Detected lcore 90 as core 18 on socket 0 00:03:44.503 EAL: Detected lcore 91 as core 19 on socket 0 00:03:44.503 EAL: Detected lcore 92 as core 20 on socket 0 00:03:44.503 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:44.503 EAL: Detected lcore 94 as core 22 on socket 0 00:03:44.503 EAL: Detected lcore 95 as core 23 on socket 0 00:03:44.503 EAL: Detected lcore 96 as core 24 on socket 0 00:03:44.503 EAL: Detected lcore 97 as core 25 on socket 0 00:03:44.503 EAL: Detected lcore 98 as core 26 on socket 0 00:03:44.503 EAL: Detected lcore 99 as core 27 on socket 0 00:03:44.503 EAL: Detected lcore 100 as core 28 on socket 0 00:03:44.503 EAL: Detected lcore 101 as core 29 on socket 0 00:03:44.503 EAL: Detected lcore 102 as core 30 on socket 0 00:03:44.503 EAL: Detected lcore 103 as core 31 on socket 0 00:03:44.503 EAL: Detected lcore 104 as core 32 on socket 0 00:03:44.503 EAL: Detected lcore 105 as core 33 on socket 0 00:03:44.503 EAL: Detected lcore 106 as core 34 on socket 0 00:03:44.503 EAL: Detected lcore 107 as core 35 on socket 0 00:03:44.503 EAL: Detected lcore 108 as core 0 on socket 1 00:03:44.503 EAL: Detected lcore 109 as core 1 on socket 1 00:03:44.503 EAL: Detected lcore 110 as core 2 on socket 1 00:03:44.503 EAL: Detected lcore 111 as core 3 on socket 1 00:03:44.503 EAL: Detected lcore 112 as core 4 on socket 1 00:03:44.503 EAL: Detected lcore 113 as core 5 on socket 1 00:03:44.503 EAL: Detected lcore 114 as core 6 on socket 1 00:03:44.503 EAL: Detected lcore 115 as core 7 on socket 1 00:03:44.503 EAL: Detected lcore 116 as core 8 on socket 1 00:03:44.503 EAL: Detected lcore 117 as core 9 on socket 1 00:03:44.503 EAL: Detected lcore 118 as core 10 on socket 1 00:03:44.503 EAL: Detected lcore 119 as core 11 on socket 1 00:03:44.503 EAL: Detected lcore 120 as core 12 on socket 1 00:03:44.503 EAL: Detected lcore 121 as core 13 on socket 1 00:03:44.503 EAL: Detected lcore 122 as core 14 on socket 1 00:03:44.503 EAL: Detected lcore 123 as core 15 on socket 1 00:03:44.503 EAL: Detected lcore 124 as core 16 on socket 1 00:03:44.503 EAL: Detected lcore 125 as core 17 on socket 1 00:03:44.503 EAL: Detected lcore 126 as core 18 on socket 1 00:03:44.503 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:44.503 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:44.503 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:44.503 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:44.503 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:44.503 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:44.503 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:44.503 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:44.503 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:44.503 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:44.503 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:44.503 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:44.503 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:44.503 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:44.503 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:44.503 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:44.503 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:44.503 EAL: Maximum logical cores by configuration: 128 00:03:44.503 EAL: Detected CPU lcores: 128 00:03:44.503 EAL: Detected NUMA nodes: 2 00:03:44.503 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:44.503 EAL: Detected shared linkage of DPDK 00:03:44.503 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.503 EAL: Bus pci wants IOVA as 'DC' 00:03:44.503 EAL: Buses did not request a specific IOVA mode. 00:03:44.503 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:44.503 EAL: Selected IOVA mode 'VA' 00:03:44.503 EAL: Probing VFIO support... 00:03:44.503 EAL: IOMMU type 1 (Type 1) is supported 00:03:44.503 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:44.503 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:44.503 EAL: VFIO support initialized 00:03:44.503 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.503 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:44.503 EAL: Setting up physically contiguous memory... 
00:03:44.503 EAL: Setting maximum number of open files to 524288 00:03:44.503 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:44.503 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:44.503 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:44.503 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:44.503 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.503 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:44.503 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.503 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.503 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:44.503 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:44.503 EAL: Hugepages will be freed exactly as allocated. 
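The memseg geometry EAL prints above is internally consistent: each of the eight lists (4 per NUMA node, 2 nodes) reserves a 0x61000-byte header plus a 0x400000000-byte VA window, and `n_segs:8192` pages of `hugepage_sz:2097152` exactly fill one window. A minimal sketch checking that arithmetic, using only the sizes printed in the log (nothing here queries a live system):

```shell
#!/usr/bin/env bash
# Sanity-check the EAL memseg geometry from the log above.
n_segs=8192                  # from "n_segs:8192"
hugepage_sz=2097152          # 2 MiB, from "hugepage_sz:2097152"
list_va=$((16#400000000))    # per-list VA window, from "size = 0x400000000"
lists_per_socket=4           # "Creating 4 segment lists" per socket
sockets=2                    # "Detected NUMA nodes: 2"

per_list=$((n_segs * hugepage_sz))
total_va=$((list_va * lists_per_socket * sockets))

echo "per-list capacity: $per_list bytes"
echo "per-list window:   $list_va bytes"
echo "total VA reserved: $((total_va / 1024 / 1024 / 1024)) GiB"
```

Run as-is, this confirms the per-list capacity equals the 16 GiB window and that the eight lists together reserve 128 GiB of virtual address space, which is why the reservations land at widely spaced addresses like 0x200000200000 and 0x201c01000000.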
00:03:44.503 EAL: No shared files mode enabled, IPC is disabled 00:03:44.503 EAL: No shared files mode enabled, IPC is disabled 00:03:44.503 EAL: TSC frequency is ~2400000 KHz 00:03:44.503 EAL: Main lcore 0 is ready (tid=7f3bc5dfda00;cpuset=[0]) 00:03:44.503 EAL: Trying to obtain current memory policy. 00:03:44.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.503 EAL: Restoring previous memory policy: 0 00:03:44.503 EAL: request: mp_malloc_sync 00:03:44.503 EAL: No shared files mode enabled, IPC is disabled 00:03:44.503 EAL: Heap on socket 0 was expanded by 2MB 00:03:44.503 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:44.765 EAL: Mem event callback 'spdk:(nil)' registered 00:03:44.765 00:03:44.765 00:03:44.765 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.765 http://cunit.sourceforge.net/ 00:03:44.765 00:03:44.765 00:03:44.765 Suite: components_suite 00:03:44.765 Test: vtophys_malloc_test ...passed 00:03:44.765 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 4MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 4MB 00:03:44.765 EAL: Trying to obtain current memory policy. 
00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 6MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 6MB 00:03:44.765 EAL: Trying to obtain current memory policy. 00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 10MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 10MB 00:03:44.765 EAL: Trying to obtain current memory policy. 00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 18MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 18MB 00:03:44.765 EAL: Trying to obtain current memory policy. 
00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 34MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 34MB 00:03:44.765 EAL: Trying to obtain current memory policy. 00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.765 EAL: Trying to obtain current memory policy. 00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.765 EAL: Trying to obtain current memory policy. 
00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.765 EAL: Restoring previous memory policy: 4 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.765 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.765 EAL: request: mp_malloc_sync 00:03:44.765 EAL: No shared files mode enabled, IPC is disabled 00:03:44.765 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.765 EAL: Trying to obtain current memory policy. 00:03:44.765 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.027 EAL: Restoring previous memory policy: 4 00:03:45.027 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.027 EAL: request: mp_malloc_sync 00:03:45.027 EAL: No shared files mode enabled, IPC is disabled 00:03:45.027 EAL: Heap on socket 0 was expanded by 514MB 00:03:45.027 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.027 EAL: request: mp_malloc_sync 00:03:45.027 EAL: No shared files mode enabled, IPC is disabled 00:03:45.027 EAL: Heap on socket 0 was shrunk by 514MB 00:03:45.027 EAL: Trying to obtain current memory policy. 
00:03:45.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.288 EAL: Restoring previous memory policy: 4 00:03:45.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.288 EAL: request: mp_malloc_sync 00:03:45.288 EAL: No shared files mode enabled, IPC is disabled 00:03:45.288 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.288 EAL: request: mp_malloc_sync 00:03:45.288 EAL: No shared files mode enabled, IPC is disabled 00:03:45.288 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:45.288 passed 00:03:45.288 00:03:45.288 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.288 suites 1 1 n/a 0 0 00:03:45.288 tests 2 2 2 0 0 00:03:45.288 asserts 497 497 497 0 n/a 00:03:45.288 00:03:45.288 Elapsed time = 0.687 seconds 00:03:45.288 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.288 EAL: request: mp_malloc_sync 00:03:45.288 EAL: No shared files mode enabled, IPC is disabled 00:03:45.288 EAL: Heap on socket 0 was shrunk by 2MB 00:03:45.288 EAL: No shared files mode enabled, IPC is disabled 00:03:45.288 EAL: No shared files mode enabled, IPC is disabled 00:03:45.288 EAL: No shared files mode enabled, IPC is disabled 00:03:45.288 00:03:45.288 real 0m0.843s 00:03:45.288 user 0m0.442s 00:03:45.288 sys 0m0.369s 00:03:45.288 13:11:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.288 13:11:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:45.288 ************************************ 00:03:45.288 END TEST env_vtophys 00:03:45.288 ************************************ 00:03:45.549 13:11:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:45.549 13:11:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.549 13:11:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.549 13:11:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.549 
************************************ 00:03:45.549 START TEST env_pci 00:03:45.549 ************************************ 00:03:45.549 13:11:31 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:45.549 00:03:45.549 00:03:45.549 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.549 http://cunit.sourceforge.net/ 00:03:45.549 00:03:45.549 00:03:45.549 Suite: pci 00:03:45.549 Test: pci_hook ...[2024-12-06 13:11:32.017865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1906367 has claimed it 00:03:45.549 EAL: Cannot find device (10000:00:01.0) 00:03:45.549 EAL: Failed to attach device on primary process 00:03:45.549 passed 00:03:45.549 00:03:45.549 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.549 suites 1 1 n/a 0 0 00:03:45.549 tests 1 1 1 0 0 00:03:45.549 asserts 25 25 25 0 n/a 00:03:45.549 00:03:45.549 Elapsed time = 0.037 seconds 00:03:45.549 00:03:45.549 real 0m0.059s 00:03:45.549 user 0m0.019s 00:03:45.549 sys 0m0.039s 00:03:45.549 13:11:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.549 13:11:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:45.549 ************************************ 00:03:45.549 END TEST env_pci 00:03:45.549 ************************************ 00:03:45.549 13:11:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.549 13:11:32 env -- env/env.sh@15 -- # uname 00:03:45.549 13:11:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:45.549 13:11:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:45.549 13:11:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.549 13:11:32 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:45.549 13:11:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.549 13:11:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.549 ************************************ 00:03:45.549 START TEST env_dpdk_post_init 00:03:45.549 ************************************ 00:03:45.549 13:11:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.549 EAL: Detected CPU lcores: 128 00:03:45.549 EAL: Detected NUMA nodes: 2 00:03:45.549 EAL: Detected shared linkage of DPDK 00:03:45.549 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.549 EAL: Selected IOVA mode 'VA' 00:03:45.549 EAL: VFIO support initialized 00:03:45.810 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.810 EAL: Using IOMMU type 1 (Type 1) 00:03:45.810 EAL: Ignore mapping IO port bar(1) 00:03:46.070 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:46.070 EAL: Ignore mapping IO port bar(1) 00:03:46.331 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:46.331 EAL: Ignore mapping IO port bar(1) 00:03:46.331 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:46.592 EAL: Ignore mapping IO port bar(1) 00:03:46.592 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:46.853 EAL: Ignore mapping IO port bar(1) 00:03:46.853 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:47.114 EAL: Ignore mapping IO port bar(1) 00:03:47.114 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:47.114 EAL: Ignore mapping IO port bar(1) 00:03:47.375 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:47.375 EAL: Ignore mapping IO port bar(1) 00:03:47.657 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:47.657 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:47.918 EAL: Ignore mapping IO port bar(1) 00:03:47.918 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:48.178 EAL: Ignore mapping IO port bar(1) 00:03:48.178 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:48.438 EAL: Ignore mapping IO port bar(1) 00:03:48.438 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:48.698 EAL: Ignore mapping IO port bar(1) 00:03:48.698 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:48.698 EAL: Ignore mapping IO port bar(1) 00:03:48.958 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:48.958 EAL: Ignore mapping IO port bar(1) 00:03:49.218 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:49.218 EAL: Ignore mapping IO port bar(1) 00:03:49.218 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:49.478 EAL: Ignore mapping IO port bar(1) 00:03:49.478 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:49.478 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:49.478 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:49.739 Starting DPDK initialization... 00:03:49.739 Starting SPDK post initialization... 00:03:49.739 SPDK NVMe probe 00:03:49.739 Attaching to 0000:65:00.0 00:03:49.739 Attached to 0000:65:00.0 00:03:49.739 Cleaning up... 
00:03:51.649 00:03:51.649 real 0m5.745s 00:03:51.649 user 0m0.118s 00:03:51.649 sys 0m0.180s 00:03:51.649 13:11:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.649 13:11:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:51.649 ************************************ 00:03:51.649 END TEST env_dpdk_post_init 00:03:51.649 ************************************ 00:03:51.649 13:11:37 env -- env/env.sh@26 -- # uname 00:03:51.649 13:11:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:51.649 13:11:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:51.649 13:11:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.649 13:11:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.649 13:11:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.649 ************************************ 00:03:51.649 START TEST env_mem_callbacks 00:03:51.649 ************************************ 00:03:51.649 13:11:37 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:51.649 EAL: Detected CPU lcores: 128 00:03:51.649 EAL: Detected NUMA nodes: 2 00:03:51.649 EAL: Detected shared linkage of DPDK 00:03:51.649 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.649 EAL: Selected IOVA mode 'VA' 00:03:51.649 EAL: VFIO support initialized 00:03:51.649 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.649 00:03:51.649 00:03:51.649 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.649 http://cunit.sourceforge.net/ 00:03:51.649 00:03:51.649 00:03:51.649 Suite: memory 00:03:51.649 Test: test ... 
00:03:51.649 register 0x200000200000 2097152 00:03:51.649 malloc 3145728 00:03:51.649 register 0x200000400000 4194304 00:03:51.649 buf 0x200000500000 len 3145728 PASSED 00:03:51.649 malloc 64 00:03:51.649 buf 0x2000004fff40 len 64 PASSED 00:03:51.649 malloc 4194304 00:03:51.649 register 0x200000800000 6291456 00:03:51.649 buf 0x200000a00000 len 4194304 PASSED 00:03:51.649 free 0x200000500000 3145728 00:03:51.649 free 0x2000004fff40 64 00:03:51.649 unregister 0x200000400000 4194304 PASSED 00:03:51.649 free 0x200000a00000 4194304 00:03:51.649 unregister 0x200000800000 6291456 PASSED 00:03:51.649 malloc 8388608 00:03:51.649 register 0x200000400000 10485760 00:03:51.649 buf 0x200000600000 len 8388608 PASSED 00:03:51.649 free 0x200000600000 8388608 00:03:51.649 unregister 0x200000400000 10485760 PASSED 00:03:51.649 passed 00:03:51.649 00:03:51.649 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.650 suites 1 1 n/a 0 0 00:03:51.650 tests 1 1 1 0 0 00:03:51.650 asserts 15 15 15 0 n/a 00:03:51.650 00:03:51.650 Elapsed time = 0.010 seconds 00:03:51.650 00:03:51.650 real 0m0.070s 00:03:51.650 user 0m0.021s 00:03:51.650 sys 0m0.050s 00:03:51.650 13:11:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.650 13:11:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:51.650 ************************************ 00:03:51.650 END TEST env_mem_callbacks 00:03:51.650 ************************************ 00:03:51.650 00:03:51.650 real 0m7.542s 00:03:51.650 user 0m1.073s 00:03:51.650 sys 0m1.026s 00:03:51.650 13:11:38 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.650 13:11:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.650 ************************************ 00:03:51.650 END TEST env 00:03:51.650 ************************************ 00:03:51.650 13:11:38 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:51.650 13:11:38 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.650 13:11:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.650 13:11:38 -- common/autotest_common.sh@10 -- # set +x 00:03:51.650 ************************************ 00:03:51.650 START TEST rpc 00:03:51.650 ************************************ 00:03:51.650 13:11:38 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:51.650 * Looking for test storage... 00:03:51.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.650 13:11:38 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.650 13:11:38 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.650 13:11:38 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.910 13:11:38 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.910 13:11:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.910 13:11:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.910 13:11:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.910 13:11:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.910 13:11:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.910 13:11:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.911 13:11:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.911 13:11:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.911 13:11:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.911 13:11:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.911 13:11:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.911 13:11:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:51.911 13:11:38 rpc -- scripts/common.sh@345 -- # : 1 00:03:51.911 13:11:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.911 13:11:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.911 13:11:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:51.911 13:11:38 rpc -- scripts/common.sh@353 -- # local d=1 00:03:51.911 13:11:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.911 13:11:38 rpc -- scripts/common.sh@355 -- # echo 1 00:03:51.911 13:11:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.911 13:11:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:51.911 13:11:38 rpc -- scripts/common.sh@353 -- # local d=2 00:03:51.911 13:11:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.911 13:11:38 rpc -- scripts/common.sh@355 -- # echo 2 00:03:51.911 13:11:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.911 13:11:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.911 13:11:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.911 13:11:38 rpc -- scripts/common.sh@368 -- # return 0 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.911 --rc genhtml_branch_coverage=1 00:03:51.911 --rc genhtml_function_coverage=1 00:03:51.911 --rc genhtml_legend=1 00:03:51.911 --rc geninfo_all_blocks=1 00:03:51.911 --rc geninfo_unexecuted_blocks=1 00:03:51.911 00:03:51.911 ' 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.911 --rc genhtml_branch_coverage=1 00:03:51.911 --rc genhtml_function_coverage=1 00:03:51.911 --rc genhtml_legend=1 00:03:51.911 --rc geninfo_all_blocks=1 00:03:51.911 --rc geninfo_unexecuted_blocks=1 00:03:51.911 00:03:51.911 ' 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:51.911 --rc genhtml_branch_coverage=1 00:03:51.911 --rc genhtml_function_coverage=1 00:03:51.911 --rc genhtml_legend=1 00:03:51.911 --rc geninfo_all_blocks=1 00:03:51.911 --rc geninfo_unexecuted_blocks=1 00:03:51.911 00:03:51.911 ' 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.911 --rc genhtml_branch_coverage=1 00:03:51.911 --rc genhtml_function_coverage=1 00:03:51.911 --rc genhtml_legend=1 00:03:51.911 --rc geninfo_all_blocks=1 00:03:51.911 --rc geninfo_unexecuted_blocks=1 00:03:51.911 00:03:51.911 ' 00:03:51.911 13:11:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1907750 00:03:51.911 13:11:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.911 13:11:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1907750 00:03:51.911 13:11:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 1907750 ']' 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.911 13:11:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.911 [2024-12-06 13:11:38.437629] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:03:51.911 [2024-12-06 13:11:38.437700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907750 ] 00:03:51.911 [2024-12-06 13:11:38.527656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.171 [2024-12-06 13:11:38.580024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:52.171 [2024-12-06 13:11:38.580076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1907750' to capture a snapshot of events at runtime. 00:03:52.171 [2024-12-06 13:11:38.580085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:52.171 [2024-12-06 13:11:38.580092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:52.171 [2024-12-06 13:11:38.580099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1907750 for offline analysis/debug. 
00:03:52.171 [2024-12-06 13:11:38.580915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.743 13:11:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.743 13:11:39 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:52.743 13:11:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.743 13:11:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.743 13:11:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:52.743 13:11:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:52.743 13:11:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.743 13:11:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.743 13:11:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.743 ************************************ 00:03:52.743 START TEST rpc_integrity 00:03:52.743 ************************************ 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.743 13:11:39 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.743 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:52.743 { 00:03:52.743 "name": "Malloc0", 00:03:52.743 "aliases": [ 00:03:52.743 "b4244d8a-85b0-47c1-a0e1-f6a56e2ccaa4" 00:03:52.743 ], 00:03:52.743 "product_name": "Malloc disk", 00:03:52.743 "block_size": 512, 00:03:52.743 "num_blocks": 16384, 00:03:52.743 "uuid": "b4244d8a-85b0-47c1-a0e1-f6a56e2ccaa4", 00:03:52.743 "assigned_rate_limits": { 00:03:52.743 "rw_ios_per_sec": 0, 00:03:52.743 "rw_mbytes_per_sec": 0, 00:03:52.743 "r_mbytes_per_sec": 0, 00:03:52.743 "w_mbytes_per_sec": 0 00:03:52.743 }, 00:03:52.743 "claimed": false, 00:03:52.743 "zoned": false, 00:03:52.743 "supported_io_types": { 00:03:52.743 "read": true, 00:03:52.743 "write": true, 00:03:52.743 "unmap": true, 00:03:52.743 "flush": true, 00:03:52.743 "reset": true, 00:03:52.743 "nvme_admin": false, 00:03:52.743 "nvme_io": false, 00:03:52.743 "nvme_io_md": false, 00:03:52.743 "write_zeroes": true, 00:03:52.743 "zcopy": true, 00:03:52.743 "get_zone_info": false, 00:03:52.743 
"zone_management": false, 00:03:52.743 "zone_append": false, 00:03:52.743 "compare": false, 00:03:52.743 "compare_and_write": false, 00:03:52.743 "abort": true, 00:03:52.743 "seek_hole": false, 00:03:52.743 "seek_data": false, 00:03:52.743 "copy": true, 00:03:52.743 "nvme_iov_md": false 00:03:52.743 }, 00:03:52.743 "memory_domains": [ 00:03:52.743 { 00:03:52.743 "dma_device_id": "system", 00:03:52.743 "dma_device_type": 1 00:03:52.743 }, 00:03:52.743 { 00:03:52.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.743 "dma_device_type": 2 00:03:52.743 } 00:03:52.743 ], 00:03:52.743 "driver_specific": {} 00:03:52.743 } 00:03:52.743 ]' 00:03:52.743 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.005 [2024-12-06 13:11:39.422827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:53.005 [2024-12-06 13:11:39.422873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.005 [2024-12-06 13:11:39.422890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150ef80 00:03:53.005 [2024-12-06 13:11:39.422898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.005 [2024-12-06 13:11:39.424496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.005 [2024-12-06 13:11:39.424533] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.005 Passthru0 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.005 { 00:03:53.005 "name": "Malloc0", 00:03:53.005 "aliases": [ 00:03:53.005 "b4244d8a-85b0-47c1-a0e1-f6a56e2ccaa4" 00:03:53.005 ], 00:03:53.005 "product_name": "Malloc disk", 00:03:53.005 "block_size": 512, 00:03:53.005 "num_blocks": 16384, 00:03:53.005 "uuid": "b4244d8a-85b0-47c1-a0e1-f6a56e2ccaa4", 00:03:53.005 "assigned_rate_limits": { 00:03:53.005 "rw_ios_per_sec": 0, 00:03:53.005 "rw_mbytes_per_sec": 0, 00:03:53.005 "r_mbytes_per_sec": 0, 00:03:53.005 "w_mbytes_per_sec": 0 00:03:53.005 }, 00:03:53.005 "claimed": true, 00:03:53.005 "claim_type": "exclusive_write", 00:03:53.005 "zoned": false, 00:03:53.005 "supported_io_types": { 00:03:53.005 "read": true, 00:03:53.005 "write": true, 00:03:53.005 "unmap": true, 00:03:53.005 "flush": true, 00:03:53.005 "reset": true, 00:03:53.005 "nvme_admin": false, 00:03:53.005 "nvme_io": false, 00:03:53.005 "nvme_io_md": false, 00:03:53.005 "write_zeroes": true, 00:03:53.005 "zcopy": true, 00:03:53.005 "get_zone_info": false, 00:03:53.005 "zone_management": false, 00:03:53.005 "zone_append": false, 00:03:53.005 "compare": false, 00:03:53.005 "compare_and_write": false, 00:03:53.005 "abort": true, 00:03:53.005 "seek_hole": false, 00:03:53.005 "seek_data": false, 00:03:53.005 "copy": true, 00:03:53.005 "nvme_iov_md": false 00:03:53.005 }, 00:03:53.005 "memory_domains": [ 00:03:53.005 { 00:03:53.005 "dma_device_id": "system", 00:03:53.005 "dma_device_type": 1 00:03:53.005 }, 00:03:53.005 { 00:03:53.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.005 "dma_device_type": 2 00:03:53.005 } 00:03:53.005 ], 00:03:53.005 "driver_specific": {} 00:03:53.005 }, 00:03:53.005 { 
00:03:53.005 "name": "Passthru0", 00:03:53.005 "aliases": [ 00:03:53.005 "07da6c28-4e0d-5d64-bf8e-64573743f579" 00:03:53.005 ], 00:03:53.005 "product_name": "passthru", 00:03:53.005 "block_size": 512, 00:03:53.005 "num_blocks": 16384, 00:03:53.005 "uuid": "07da6c28-4e0d-5d64-bf8e-64573743f579", 00:03:53.005 "assigned_rate_limits": { 00:03:53.005 "rw_ios_per_sec": 0, 00:03:53.005 "rw_mbytes_per_sec": 0, 00:03:53.005 "r_mbytes_per_sec": 0, 00:03:53.005 "w_mbytes_per_sec": 0 00:03:53.005 }, 00:03:53.005 "claimed": false, 00:03:53.005 "zoned": false, 00:03:53.005 "supported_io_types": { 00:03:53.005 "read": true, 00:03:53.005 "write": true, 00:03:53.005 "unmap": true, 00:03:53.005 "flush": true, 00:03:53.005 "reset": true, 00:03:53.005 "nvme_admin": false, 00:03:53.005 "nvme_io": false, 00:03:53.005 "nvme_io_md": false, 00:03:53.005 "write_zeroes": true, 00:03:53.005 "zcopy": true, 00:03:53.005 "get_zone_info": false, 00:03:53.005 "zone_management": false, 00:03:53.005 "zone_append": false, 00:03:53.005 "compare": false, 00:03:53.005 "compare_and_write": false, 00:03:53.005 "abort": true, 00:03:53.005 "seek_hole": false, 00:03:53.005 "seek_data": false, 00:03:53.005 "copy": true, 00:03:53.005 "nvme_iov_md": false 00:03:53.005 }, 00:03:53.005 "memory_domains": [ 00:03:53.005 { 00:03:53.005 "dma_device_id": "system", 00:03:53.005 "dma_device_type": 1 00:03:53.005 }, 00:03:53.005 { 00:03:53.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.005 "dma_device_type": 2 00:03:53.005 } 00:03:53.005 ], 00:03:53.005 "driver_specific": { 00:03:53.005 "passthru": { 00:03:53.005 "name": "Passthru0", 00:03:53.005 "base_bdev_name": "Malloc0" 00:03:53.005 } 00:03:53.005 } 00:03:53.005 } 00:03:53.005 ]' 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.005 13:11:39 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.005 13:11:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.005 00:03:53.005 real 0m0.304s 00:03:53.005 user 0m0.181s 00:03:53.005 sys 0m0.053s 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.005 13:11:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.005 ************************************ 00:03:53.005 END TEST rpc_integrity 00:03:53.005 ************************************ 00:03:53.005 13:11:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:53.005 13:11:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.005 13:11:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.005 13:11:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.266 ************************************ 00:03:53.266 START TEST rpc_plugins 
00:03:53.266 ************************************ 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:53.266 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.266 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:53.266 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.266 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.266 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:53.266 { 00:03:53.267 "name": "Malloc1", 00:03:53.267 "aliases": [ 00:03:53.267 "654825f1-6256-4283-a7c1-12af87cc7b57" 00:03:53.267 ], 00:03:53.267 "product_name": "Malloc disk", 00:03:53.267 "block_size": 4096, 00:03:53.267 "num_blocks": 256, 00:03:53.267 "uuid": "654825f1-6256-4283-a7c1-12af87cc7b57", 00:03:53.267 "assigned_rate_limits": { 00:03:53.267 "rw_ios_per_sec": 0, 00:03:53.267 "rw_mbytes_per_sec": 0, 00:03:53.267 "r_mbytes_per_sec": 0, 00:03:53.267 "w_mbytes_per_sec": 0 00:03:53.267 }, 00:03:53.267 "claimed": false, 00:03:53.267 "zoned": false, 00:03:53.267 "supported_io_types": { 00:03:53.267 "read": true, 00:03:53.267 "write": true, 00:03:53.267 "unmap": true, 00:03:53.267 "flush": true, 00:03:53.267 "reset": true, 00:03:53.267 "nvme_admin": false, 00:03:53.267 "nvme_io": false, 00:03:53.267 "nvme_io_md": false, 00:03:53.267 "write_zeroes": true, 00:03:53.267 "zcopy": true, 00:03:53.267 "get_zone_info": false, 00:03:53.267 "zone_management": false, 00:03:53.267 
"zone_append": false, 00:03:53.267 "compare": false, 00:03:53.267 "compare_and_write": false, 00:03:53.267 "abort": true, 00:03:53.267 "seek_hole": false, 00:03:53.267 "seek_data": false, 00:03:53.267 "copy": true, 00:03:53.267 "nvme_iov_md": false 00:03:53.267 }, 00:03:53.267 "memory_domains": [ 00:03:53.267 { 00:03:53.267 "dma_device_id": "system", 00:03:53.267 "dma_device_type": 1 00:03:53.267 }, 00:03:53.267 { 00:03:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.267 "dma_device_type": 2 00:03:53.267 } 00:03:53.267 ], 00:03:53.267 "driver_specific": {} 00:03:53.267 } 00:03:53.267 ]' 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:53.267 13:11:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:53.267 00:03:53.267 real 0m0.161s 00:03:53.267 user 0m0.101s 00:03:53.267 sys 0m0.020s 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.267 13:11:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.267 ************************************ 
00:03:53.267 END TEST rpc_plugins 00:03:53.267 ************************************ 00:03:53.267 13:11:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:53.267 13:11:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.267 13:11:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.267 13:11:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.267 ************************************ 00:03:53.267 START TEST rpc_trace_cmd_test 00:03:53.267 ************************************ 00:03:53.267 13:11:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:53.267 13:11:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:53.267 13:11:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:53.267 13:11:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.267 13:11:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.528 13:11:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.528 13:11:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:53.528 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1907750", 00:03:53.528 "tpoint_group_mask": "0x8", 00:03:53.528 "iscsi_conn": { 00:03:53.528 "mask": "0x2", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "scsi": { 00:03:53.528 "mask": "0x4", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "bdev": { 00:03:53.528 "mask": "0x8", 00:03:53.528 "tpoint_mask": "0xffffffffffffffff" 00:03:53.528 }, 00:03:53.528 "nvmf_rdma": { 00:03:53.528 "mask": "0x10", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "nvmf_tcp": { 00:03:53.528 "mask": "0x20", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "ftl": { 00:03:53.528 "mask": "0x40", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "blobfs": { 00:03:53.528 "mask": "0x80", 00:03:53.528 
"tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "dsa": { 00:03:53.528 "mask": "0x200", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "thread": { 00:03:53.528 "mask": "0x400", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "nvme_pcie": { 00:03:53.528 "mask": "0x800", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "iaa": { 00:03:53.528 "mask": "0x1000", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "nvme_tcp": { 00:03:53.528 "mask": "0x2000", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "bdev_nvme": { 00:03:53.528 "mask": "0x4000", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.528 "sock": { 00:03:53.528 "mask": "0x8000", 00:03:53.528 "tpoint_mask": "0x0" 00:03:53.528 }, 00:03:53.529 "blob": { 00:03:53.529 "mask": "0x10000", 00:03:53.529 "tpoint_mask": "0x0" 00:03:53.529 }, 00:03:53.529 "bdev_raid": { 00:03:53.529 "mask": "0x20000", 00:03:53.529 "tpoint_mask": "0x0" 00:03:53.529 }, 00:03:53.529 "scheduler": { 00:03:53.529 "mask": "0x40000", 00:03:53.529 "tpoint_mask": "0x0" 00:03:53.529 } 00:03:53.529 }' 00:03:53.529 13:11:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:53.529 13:11:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:53.529 13:11:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:53.529 00:03:53.529 real 0m0.256s 00:03:53.529 user 0m0.211s 00:03:53.529 sys 0m0.034s 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.529 13:11:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.529 ************************************ 00:03:53.529 END TEST rpc_trace_cmd_test 00:03:53.529 ************************************ 00:03:53.790 13:11:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:53.790 13:11:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:53.790 13:11:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:53.790 13:11:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.790 13:11:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.790 13:11:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.790 ************************************ 00:03:53.790 START TEST rpc_daemon_integrity 00:03:53.790 ************************************ 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.790 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.790 { 00:03:53.790 "name": "Malloc2", 00:03:53.790 "aliases": [ 00:03:53.790 "3157898a-c1c2-401f-ba78-22d085c1ddb5" 00:03:53.790 ], 00:03:53.790 "product_name": "Malloc disk", 00:03:53.790 "block_size": 512, 00:03:53.791 "num_blocks": 16384, 00:03:53.791 "uuid": "3157898a-c1c2-401f-ba78-22d085c1ddb5", 00:03:53.791 "assigned_rate_limits": { 00:03:53.791 "rw_ios_per_sec": 0, 00:03:53.791 "rw_mbytes_per_sec": 0, 00:03:53.791 "r_mbytes_per_sec": 0, 00:03:53.791 "w_mbytes_per_sec": 0 00:03:53.791 }, 00:03:53.791 "claimed": false, 00:03:53.791 "zoned": false, 00:03:53.791 "supported_io_types": { 00:03:53.791 "read": true, 00:03:53.791 "write": true, 00:03:53.791 "unmap": true, 00:03:53.791 "flush": true, 00:03:53.791 "reset": true, 00:03:53.791 "nvme_admin": false, 00:03:53.791 "nvme_io": false, 00:03:53.791 "nvme_io_md": false, 00:03:53.791 "write_zeroes": true, 00:03:53.791 "zcopy": true, 00:03:53.791 "get_zone_info": false, 00:03:53.791 "zone_management": false, 00:03:53.791 "zone_append": false, 00:03:53.791 "compare": false, 00:03:53.791 "compare_and_write": false, 00:03:53.791 "abort": true, 00:03:53.791 "seek_hole": false, 00:03:53.791 "seek_data": false, 00:03:53.791 "copy": true, 00:03:53.791 "nvme_iov_md": false 00:03:53.791 }, 00:03:53.791 "memory_domains": [ 00:03:53.791 { 
00:03:53.791 "dma_device_id": "system", 00:03:53.791 "dma_device_type": 1 00:03:53.791 }, 00:03:53.791 { 00:03:53.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.791 "dma_device_type": 2 00:03:53.791 } 00:03:53.791 ], 00:03:53.791 "driver_specific": {} 00:03:53.791 } 00:03:53.791 ]' 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.791 [2024-12-06 13:11:40.397542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:53.791 [2024-12-06 13:11:40.397590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.791 [2024-12-06 13:11:40.397610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150ec40 00:03:53.791 [2024-12-06 13:11:40.397618] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.791 [2024-12-06 13:11:40.399137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.791 [2024-12-06 13:11:40.399175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.791 Passthru0 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.791 { 00:03:53.791 "name": "Malloc2", 00:03:53.791 "aliases": [ 00:03:53.791 "3157898a-c1c2-401f-ba78-22d085c1ddb5" 00:03:53.791 ], 00:03:53.791 "product_name": "Malloc disk", 00:03:53.791 "block_size": 512, 00:03:53.791 "num_blocks": 16384, 00:03:53.791 "uuid": "3157898a-c1c2-401f-ba78-22d085c1ddb5", 00:03:53.791 "assigned_rate_limits": { 00:03:53.791 "rw_ios_per_sec": 0, 00:03:53.791 "rw_mbytes_per_sec": 0, 00:03:53.791 "r_mbytes_per_sec": 0, 00:03:53.791 "w_mbytes_per_sec": 0 00:03:53.791 }, 00:03:53.791 "claimed": true, 00:03:53.791 "claim_type": "exclusive_write", 00:03:53.791 "zoned": false, 00:03:53.791 "supported_io_types": { 00:03:53.791 "read": true, 00:03:53.791 "write": true, 00:03:53.791 "unmap": true, 00:03:53.791 "flush": true, 00:03:53.791 "reset": true, 00:03:53.791 "nvme_admin": false, 00:03:53.791 "nvme_io": false, 00:03:53.791 "nvme_io_md": false, 00:03:53.791 "write_zeroes": true, 00:03:53.791 "zcopy": true, 00:03:53.791 "get_zone_info": false, 00:03:53.791 "zone_management": false, 00:03:53.791 "zone_append": false, 00:03:53.791 "compare": false, 00:03:53.791 "compare_and_write": false, 00:03:53.791 "abort": true, 00:03:53.791 "seek_hole": false, 00:03:53.791 "seek_data": false, 00:03:53.791 "copy": true, 00:03:53.791 "nvme_iov_md": false 00:03:53.791 }, 00:03:53.791 "memory_domains": [ 00:03:53.791 { 00:03:53.791 "dma_device_id": "system", 00:03:53.791 "dma_device_type": 1 00:03:53.791 }, 00:03:53.791 { 00:03:53.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.791 "dma_device_type": 2 00:03:53.791 } 00:03:53.791 ], 00:03:53.791 "driver_specific": {} 00:03:53.791 }, 00:03:53.791 { 00:03:53.791 "name": "Passthru0", 00:03:53.791 "aliases": [ 00:03:53.791 "9530345d-13d9-5f2d-9ae4-bc26994be27d" 00:03:53.791 ], 00:03:53.791 "product_name": "passthru", 00:03:53.791 "block_size": 512, 00:03:53.791 "num_blocks": 16384, 00:03:53.791 "uuid": 
"9530345d-13d9-5f2d-9ae4-bc26994be27d", 00:03:53.791 "assigned_rate_limits": { 00:03:53.791 "rw_ios_per_sec": 0, 00:03:53.791 "rw_mbytes_per_sec": 0, 00:03:53.791 "r_mbytes_per_sec": 0, 00:03:53.791 "w_mbytes_per_sec": 0 00:03:53.791 }, 00:03:53.791 "claimed": false, 00:03:53.791 "zoned": false, 00:03:53.791 "supported_io_types": { 00:03:53.791 "read": true, 00:03:53.791 "write": true, 00:03:53.791 "unmap": true, 00:03:53.791 "flush": true, 00:03:53.791 "reset": true, 00:03:53.791 "nvme_admin": false, 00:03:53.791 "nvme_io": false, 00:03:53.791 "nvme_io_md": false, 00:03:53.791 "write_zeroes": true, 00:03:53.791 "zcopy": true, 00:03:53.791 "get_zone_info": false, 00:03:53.791 "zone_management": false, 00:03:53.791 "zone_append": false, 00:03:53.791 "compare": false, 00:03:53.791 "compare_and_write": false, 00:03:53.791 "abort": true, 00:03:53.791 "seek_hole": false, 00:03:53.791 "seek_data": false, 00:03:53.791 "copy": true, 00:03:53.791 "nvme_iov_md": false 00:03:53.791 }, 00:03:53.791 "memory_domains": [ 00:03:53.791 { 00:03:53.791 "dma_device_id": "system", 00:03:53.791 "dma_device_type": 1 00:03:53.791 }, 00:03:53.791 { 00:03:53.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.791 "dma_device_type": 2 00:03:53.791 } 00:03:53.791 ], 00:03:53.791 "driver_specific": { 00:03:53.791 "passthru": { 00:03:53.791 "name": "Passthru0", 00:03:53.791 "base_bdev_name": "Malloc2" 00:03:53.791 } 00:03:53.791 } 00:03:53.791 } 00:03:53.791 ]' 00:03:53.791 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:54.052 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:54.052 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:54.052 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.052 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.052 13:11:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.052 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:54.053 00:03:54.053 real 0m0.305s 00:03:54.053 user 0m0.195s 00:03:54.053 sys 0m0.046s 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.053 13:11:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:54.053 ************************************ 00:03:54.053 END TEST rpc_daemon_integrity 00:03:54.053 ************************************ 00:03:54.053 13:11:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:54.053 13:11:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1907750 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 1907750 ']' 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@958 -- # kill -0 1907750 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@959 -- # uname 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.053 13:11:40 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1907750 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1907750' 00:03:54.053 killing process with pid 1907750 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@973 -- # kill 1907750 00:03:54.053 13:11:40 rpc -- common/autotest_common.sh@978 -- # wait 1907750 00:03:54.314 00:03:54.314 real 0m2.733s 00:03:54.314 user 0m3.446s 00:03:54.314 sys 0m0.893s 00:03:54.314 13:11:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.314 13:11:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.314 ************************************ 00:03:54.314 END TEST rpc 00:03:54.314 ************************************ 00:03:54.314 13:11:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:54.314 13:11:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.314 13:11:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.314 13:11:40 -- common/autotest_common.sh@10 -- # set +x 00:03:54.575 ************************************ 00:03:54.575 START TEST skip_rpc 00:03:54.575 ************************************ 00:03:54.575 13:11:40 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:54.575 * Looking for test storage... 
00:03:54.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:54.575 13:11:41 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:54.575 13:11:41 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:54.575 13:11:41 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:54.575 13:11:41 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.575 13:11:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:54.576 13:11:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.576 13:11:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.576 13:11:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.576 13:11:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.576 --rc genhtml_branch_coverage=1 00:03:54.576 --rc genhtml_function_coverage=1 00:03:54.576 --rc genhtml_legend=1 00:03:54.576 --rc geninfo_all_blocks=1 00:03:54.576 --rc geninfo_unexecuted_blocks=1 00:03:54.576 00:03:54.576 ' 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.576 --rc genhtml_branch_coverage=1 00:03:54.576 --rc genhtml_function_coverage=1 00:03:54.576 --rc genhtml_legend=1 00:03:54.576 --rc geninfo_all_blocks=1 00:03:54.576 --rc geninfo_unexecuted_blocks=1 00:03:54.576 00:03:54.576 ' 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:03:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.576 --rc genhtml_branch_coverage=1 00:03:54.576 --rc genhtml_function_coverage=1 00:03:54.576 --rc genhtml_legend=1 00:03:54.576 --rc geninfo_all_blocks=1 00:03:54.576 --rc geninfo_unexecuted_blocks=1 00:03:54.576 00:03:54.576 ' 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.576 --rc genhtml_branch_coverage=1 00:03:54.576 --rc genhtml_function_coverage=1 00:03:54.576 --rc genhtml_legend=1 00:03:54.576 --rc geninfo_all_blocks=1 00:03:54.576 --rc geninfo_unexecuted_blocks=1 00:03:54.576 00:03:54.576 ' 00:03:54.576 13:11:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:54.576 13:11:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:54.576 13:11:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.576 13:11:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.576 ************************************ 00:03:54.576 START TEST skip_rpc 00:03:54.576 ************************************ 00:03:54.576 13:11:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:54.837 13:11:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1908357 00:03:54.837 13:11:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.837 13:11:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:54.837 13:11:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:03:54.837 [2024-12-06 13:11:41.291738] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:03:54.837 [2024-12-06 13:11:41.291798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908357 ] 00:03:54.837 [2024-12-06 13:11:41.387063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.837 [2024-12-06 13:11:41.441059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:00.126 13:11:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1908357 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1908357 ']' 00:04:00.126 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1908357 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908357 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908357' 00:04:00.127 killing process with pid 1908357 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1908357 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1908357 00:04:00.127 00:04:00.127 real 0m5.266s 00:04:00.127 user 0m5.016s 00:04:00.127 sys 0m0.295s 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.127 13:11:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.127 ************************************ 00:04:00.127 END TEST skip_rpc 00:04:00.127 ************************************ 00:04:00.127 13:11:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:00.127 13:11:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.127 13:11:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.127 13:11:46 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.127 ************************************ 00:04:00.127 START TEST skip_rpc_with_json 00:04:00.127 ************************************ 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1909466 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1909466 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1909466 ']' 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.127 13:11:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.127 [2024-12-06 13:11:46.635589] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:00.127 [2024-12-06 13:11:46.635649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909466 ] 00:04:00.127 [2024-12-06 13:11:46.719422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.127 [2024-12-06 13:11:46.751868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.069 [2024-12-06 13:11:47.420331] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:01.069 request: 00:04:01.069 { 00:04:01.069 "trtype": "tcp", 00:04:01.069 "method": "nvmf_get_transports", 00:04:01.069 "req_id": 1 00:04:01.069 } 00:04:01.069 Got JSON-RPC error response 00:04:01.069 response: 00:04:01.069 { 00:04:01.069 "code": -19, 00:04:01.069 "message": "No such device" 00:04:01.069 } 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.069 [2024-12-06 13:11:47.432434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.069 13:11:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.069 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.069 { 00:04:01.069 "subsystems": [ 00:04:01.069 { 00:04:01.069 "subsystem": "fsdev", 00:04:01.069 "config": [ 00:04:01.069 { 00:04:01.069 "method": "fsdev_set_opts", 00:04:01.069 "params": { 00:04:01.069 "fsdev_io_pool_size": 65535, 00:04:01.069 "fsdev_io_cache_size": 256 00:04:01.069 } 00:04:01.069 } 00:04:01.069 ] 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "vfio_user_target", 00:04:01.069 "config": null 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "keyring", 00:04:01.069 "config": [] 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "iobuf", 00:04:01.069 "config": [ 00:04:01.069 { 00:04:01.069 "method": "iobuf_set_options", 00:04:01.069 "params": { 00:04:01.069 "small_pool_count": 8192, 00:04:01.069 "large_pool_count": 1024, 00:04:01.069 "small_bufsize": 8192, 00:04:01.069 "large_bufsize": 135168, 00:04:01.069 "enable_numa": false 00:04:01.069 } 00:04:01.069 } 00:04:01.069 ] 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "sock", 00:04:01.069 "config": [ 00:04:01.069 { 00:04:01.069 "method": "sock_set_default_impl", 00:04:01.069 "params": { 00:04:01.069 "impl_name": "posix" 00:04:01.069 } 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "method": "sock_impl_set_options", 00:04:01.069 "params": { 00:04:01.069 "impl_name": "ssl", 00:04:01.069 "recv_buf_size": 4096, 00:04:01.069 "send_buf_size": 4096, 
00:04:01.069 "enable_recv_pipe": true, 00:04:01.069 "enable_quickack": false, 00:04:01.069 "enable_placement_id": 0, 00:04:01.069 "enable_zerocopy_send_server": true, 00:04:01.069 "enable_zerocopy_send_client": false, 00:04:01.069 "zerocopy_threshold": 0, 00:04:01.069 "tls_version": 0, 00:04:01.069 "enable_ktls": false 00:04:01.069 } 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "method": "sock_impl_set_options", 00:04:01.069 "params": { 00:04:01.069 "impl_name": "posix", 00:04:01.069 "recv_buf_size": 2097152, 00:04:01.069 "send_buf_size": 2097152, 00:04:01.069 "enable_recv_pipe": true, 00:04:01.069 "enable_quickack": false, 00:04:01.069 "enable_placement_id": 0, 00:04:01.069 "enable_zerocopy_send_server": true, 00:04:01.069 "enable_zerocopy_send_client": false, 00:04:01.069 "zerocopy_threshold": 0, 00:04:01.069 "tls_version": 0, 00:04:01.069 "enable_ktls": false 00:04:01.069 } 00:04:01.069 } 00:04:01.069 ] 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "vmd", 00:04:01.069 "config": [] 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "accel", 00:04:01.069 "config": [ 00:04:01.069 { 00:04:01.069 "method": "accel_set_options", 00:04:01.069 "params": { 00:04:01.069 "small_cache_size": 128, 00:04:01.069 "large_cache_size": 16, 00:04:01.069 "task_count": 2048, 00:04:01.069 "sequence_count": 2048, 00:04:01.069 "buf_count": 2048 00:04:01.069 } 00:04:01.069 } 00:04:01.069 ] 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "subsystem": "bdev", 00:04:01.069 "config": [ 00:04:01.069 { 00:04:01.069 "method": "bdev_set_options", 00:04:01.069 "params": { 00:04:01.069 "bdev_io_pool_size": 65535, 00:04:01.069 "bdev_io_cache_size": 256, 00:04:01.069 "bdev_auto_examine": true, 00:04:01.069 "iobuf_small_cache_size": 128, 00:04:01.069 "iobuf_large_cache_size": 16 00:04:01.069 } 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "method": "bdev_raid_set_options", 00:04:01.069 "params": { 00:04:01.069 "process_window_size_kb": 1024, 00:04:01.069 "process_max_bandwidth_mb_sec": 0 
00:04:01.069 } 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "method": "bdev_iscsi_set_options", 00:04:01.069 "params": { 00:04:01.069 "timeout_sec": 30 00:04:01.069 } 00:04:01.069 }, 00:04:01.069 { 00:04:01.069 "method": "bdev_nvme_set_options", 00:04:01.069 "params": { 00:04:01.069 "action_on_timeout": "none", 00:04:01.069 "timeout_us": 0, 00:04:01.069 "timeout_admin_us": 0, 00:04:01.069 "keep_alive_timeout_ms": 10000, 00:04:01.069 "arbitration_burst": 0, 00:04:01.069 "low_priority_weight": 0, 00:04:01.069 "medium_priority_weight": 0, 00:04:01.069 "high_priority_weight": 0, 00:04:01.069 "nvme_adminq_poll_period_us": 10000, 00:04:01.069 "nvme_ioq_poll_period_us": 0, 00:04:01.069 "io_queue_requests": 0, 00:04:01.069 "delay_cmd_submit": true, 00:04:01.069 "transport_retry_count": 4, 00:04:01.069 "bdev_retry_count": 3, 00:04:01.069 "transport_ack_timeout": 0, 00:04:01.069 "ctrlr_loss_timeout_sec": 0, 00:04:01.069 "reconnect_delay_sec": 0, 00:04:01.069 "fast_io_fail_timeout_sec": 0, 00:04:01.069 "disable_auto_failback": false, 00:04:01.069 "generate_uuids": false, 00:04:01.069 "transport_tos": 0, 00:04:01.069 "nvme_error_stat": false, 00:04:01.069 "rdma_srq_size": 0, 00:04:01.069 "io_path_stat": false, 00:04:01.069 "allow_accel_sequence": false, 00:04:01.069 "rdma_max_cq_size": 0, 00:04:01.070 "rdma_cm_event_timeout_ms": 0, 00:04:01.070 "dhchap_digests": [ 00:04:01.070 "sha256", 00:04:01.070 "sha384", 00:04:01.070 "sha512" 00:04:01.070 ], 00:04:01.070 "dhchap_dhgroups": [ 00:04:01.070 "null", 00:04:01.070 "ffdhe2048", 00:04:01.070 "ffdhe3072", 00:04:01.070 "ffdhe4096", 00:04:01.070 "ffdhe6144", 00:04:01.070 "ffdhe8192" 00:04:01.070 ] 00:04:01.070 } 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "method": "bdev_nvme_set_hotplug", 00:04:01.070 "params": { 00:04:01.070 "period_us": 100000, 00:04:01.070 "enable": false 00:04:01.070 } 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "method": "bdev_wait_for_examine" 00:04:01.070 } 00:04:01.070 ] 00:04:01.070 }, 00:04:01.070 { 
00:04:01.070 "subsystem": "scsi", 00:04:01.070 "config": null 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "scheduler", 00:04:01.070 "config": [ 00:04:01.070 { 00:04:01.070 "method": "framework_set_scheduler", 00:04:01.070 "params": { 00:04:01.070 "name": "static" 00:04:01.070 } 00:04:01.070 } 00:04:01.070 ] 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "vhost_scsi", 00:04:01.070 "config": [] 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "vhost_blk", 00:04:01.070 "config": [] 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "ublk", 00:04:01.070 "config": [] 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "nbd", 00:04:01.070 "config": [] 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "nvmf", 00:04:01.070 "config": [ 00:04:01.070 { 00:04:01.070 "method": "nvmf_set_config", 00:04:01.070 "params": { 00:04:01.070 "discovery_filter": "match_any", 00:04:01.070 "admin_cmd_passthru": { 00:04:01.070 "identify_ctrlr": false 00:04:01.070 }, 00:04:01.070 "dhchap_digests": [ 00:04:01.070 "sha256", 00:04:01.070 "sha384", 00:04:01.070 "sha512" 00:04:01.070 ], 00:04:01.070 "dhchap_dhgroups": [ 00:04:01.070 "null", 00:04:01.070 "ffdhe2048", 00:04:01.070 "ffdhe3072", 00:04:01.070 "ffdhe4096", 00:04:01.070 "ffdhe6144", 00:04:01.070 "ffdhe8192" 00:04:01.070 ] 00:04:01.070 } 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "method": "nvmf_set_max_subsystems", 00:04:01.070 "params": { 00:04:01.070 "max_subsystems": 1024 00:04:01.070 } 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "method": "nvmf_set_crdt", 00:04:01.070 "params": { 00:04:01.070 "crdt1": 0, 00:04:01.070 "crdt2": 0, 00:04:01.070 "crdt3": 0 00:04:01.070 } 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "method": "nvmf_create_transport", 00:04:01.070 "params": { 00:04:01.070 "trtype": "TCP", 00:04:01.070 "max_queue_depth": 128, 00:04:01.070 "max_io_qpairs_per_ctrlr": 127, 00:04:01.070 "in_capsule_data_size": 4096, 00:04:01.070 "max_io_size": 131072, 00:04:01.070 
"io_unit_size": 131072, 00:04:01.070 "max_aq_depth": 128, 00:04:01.070 "num_shared_buffers": 511, 00:04:01.070 "buf_cache_size": 4294967295, 00:04:01.070 "dif_insert_or_strip": false, 00:04:01.070 "zcopy": false, 00:04:01.070 "c2h_success": true, 00:04:01.070 "sock_priority": 0, 00:04:01.070 "abort_timeout_sec": 1, 00:04:01.070 "ack_timeout": 0, 00:04:01.070 "data_wr_pool_size": 0 00:04:01.070 } 00:04:01.070 } 00:04:01.070 ] 00:04:01.070 }, 00:04:01.070 { 00:04:01.070 "subsystem": "iscsi", 00:04:01.070 "config": [ 00:04:01.070 { 00:04:01.070 "method": "iscsi_set_options", 00:04:01.070 "params": { 00:04:01.070 "node_base": "iqn.2016-06.io.spdk", 00:04:01.070 "max_sessions": 128, 00:04:01.070 "max_connections_per_session": 2, 00:04:01.070 "max_queue_depth": 64, 00:04:01.070 "default_time2wait": 2, 00:04:01.070 "default_time2retain": 20, 00:04:01.070 "first_burst_length": 8192, 00:04:01.070 "immediate_data": true, 00:04:01.070 "allow_duplicated_isid": false, 00:04:01.070 "error_recovery_level": 0, 00:04:01.070 "nop_timeout": 60, 00:04:01.070 "nop_in_interval": 30, 00:04:01.070 "disable_chap": false, 00:04:01.070 "require_chap": false, 00:04:01.070 "mutual_chap": false, 00:04:01.070 "chap_group": 0, 00:04:01.070 "max_large_datain_per_connection": 64, 00:04:01.070 "max_r2t_per_connection": 4, 00:04:01.070 "pdu_pool_size": 36864, 00:04:01.070 "immediate_data_pool_size": 16384, 00:04:01.070 "data_out_pool_size": 2048 00:04:01.070 } 00:04:01.070 } 00:04:01.070 ] 00:04:01.070 } 00:04:01.070 ] 00:04:01.070 } 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1909466 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1909466 ']' 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1909466 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909466 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909466' 00:04:01.070 killing process with pid 1909466 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1909466 00:04:01.070 13:11:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1909466 00:04:01.330 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1909738 00:04:01.330 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:01.330 13:11:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1909738 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1909738 ']' 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1909738 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909738 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909738' 00:04:06.631 killing process with pid 1909738 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1909738 00:04:06.631 13:11:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1909738 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.631 00:04:06.631 real 0m6.548s 00:04:06.631 user 0m6.452s 00:04:06.631 sys 0m0.551s 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.631 ************************************ 00:04:06.631 END TEST skip_rpc_with_json 00:04:06.631 ************************************ 00:04:06.631 13:11:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:06.631 13:11:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.631 13:11:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.631 13:11:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.631 ************************************ 00:04:06.631 START TEST skip_rpc_with_delay 00:04:06.631 ************************************ 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:06.631 [2024-12-06 13:11:53.265248] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.631 00:04:06.631 real 0m0.081s 00:04:06.631 user 0m0.047s 00:04:06.631 sys 0m0.033s 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.631 13:11:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:06.631 ************************************ 00:04:06.631 END TEST skip_rpc_with_delay 00:04:06.632 ************************************ 00:04:06.892 13:11:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:06.892 13:11:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:06.892 13:11:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:06.892 13:11:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.892 13:11:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.892 13:11:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.892 ************************************ 00:04:06.892 START TEST exit_on_failed_rpc_init 00:04:06.892 ************************************ 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1910911 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1910911 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1910911 ']' 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.892 13:11:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.892 [2024-12-06 13:11:53.431746] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:06.892 [2024-12-06 13:11:53.431819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910911 ] 00:04:06.892 [2024-12-06 13:11:53.520374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.152 [2024-12-06 13:11:53.555146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:07.722 
13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:07.722 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:07.722 [2024-12-06 13:11:54.281346] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:07.722 [2024-12-06 13:11:54.281395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911134 ] 00:04:07.722 [2024-12-06 13:11:54.368996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.983 [2024-12-06 13:11:54.404706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.983 [2024-12-06 13:11:54.404754] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:07.983 [2024-12-06 13:11:54.404764] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:07.983 [2024-12-06 13:11:54.404771] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1910911 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1910911 ']' 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1910911 00:04:07.983 13:11:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910911 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910911' 00:04:07.983 killing process with pid 1910911 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1910911 00:04:07.983 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1910911 00:04:08.243 00:04:08.243 real 0m1.323s 00:04:08.243 user 0m1.539s 00:04:08.243 sys 0m0.390s 00:04:08.243 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.243 13:11:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.243 ************************************ 00:04:08.243 END TEST exit_on_failed_rpc_init 00:04:08.243 ************************************ 00:04:08.243 13:11:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.243 00:04:08.243 real 0m13.748s 00:04:08.243 user 0m13.272s 00:04:08.243 sys 0m1.612s 00:04:08.243 13:11:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.243 13:11:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.243 ************************************ 00:04:08.243 END TEST skip_rpc 00:04:08.243 ************************************ 00:04:08.243 13:11:54 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:08.243 13:11:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.243 13:11:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.243 13:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:08.243 ************************************ 00:04:08.243 START TEST rpc_client 00:04:08.243 ************************************ 00:04:08.243 13:11:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:08.503 * Looking for test storage... 00:04:08.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:08.503 13:11:54 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.503 13:11:54 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.503 13:11:54 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.503 13:11:54 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:08.503 13:11:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.503 13:11:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:08.503 13:11:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.503 13:11:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.503 --rc genhtml_branch_coverage=1 00:04:08.503 --rc genhtml_function_coverage=1 00:04:08.503 --rc genhtml_legend=1 00:04:08.503 --rc geninfo_all_blocks=1 00:04:08.503 --rc geninfo_unexecuted_blocks=1 00:04:08.503 00:04:08.503 ' 00:04:08.503 13:11:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.503 --rc genhtml_branch_coverage=1 
00:04:08.503 --rc genhtml_function_coverage=1 00:04:08.503 --rc genhtml_legend=1 00:04:08.503 --rc geninfo_all_blocks=1 00:04:08.503 --rc geninfo_unexecuted_blocks=1 00:04:08.503 00:04:08.503 ' 00:04:08.503 13:11:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.503 --rc genhtml_branch_coverage=1 00:04:08.503 --rc genhtml_function_coverage=1 00:04:08.503 --rc genhtml_legend=1 00:04:08.503 --rc geninfo_all_blocks=1 00:04:08.503 --rc geninfo_unexecuted_blocks=1 00:04:08.503 00:04:08.503 ' 00:04:08.503 13:11:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.503 --rc genhtml_branch_coverage=1 00:04:08.503 --rc genhtml_function_coverage=1 00:04:08.503 --rc genhtml_legend=1 00:04:08.503 --rc geninfo_all_blocks=1 00:04:08.503 --rc geninfo_unexecuted_blocks=1 00:04:08.503 00:04:08.503 ' 00:04:08.504 13:11:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:08.504 OK 00:04:08.504 13:11:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:08.504 00:04:08.504 real 0m0.224s 00:04:08.504 user 0m0.125s 00:04:08.504 sys 0m0.113s 00:04:08.504 13:11:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.504 13:11:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:08.504 ************************************ 00:04:08.504 END TEST rpc_client 00:04:08.504 ************************************ 00:04:08.504 13:11:55 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:08.504 13:11:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.504 13:11:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.504 13:11:55 -- common/autotest_common.sh@10 
-- # set +x 00:04:08.504 ************************************ 00:04:08.504 START TEST json_config 00:04:08.504 ************************************ 00:04:08.504 13:11:55 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.764 13:11:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.764 13:11:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.764 13:11:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.764 13:11:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.764 13:11:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.764 13:11:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:08.764 13:11:55 json_config -- scripts/common.sh@345 -- # : 1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.764 13:11:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.764 13:11:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@353 -- # local d=1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.764 13:11:55 json_config -- scripts/common.sh@355 -- # echo 1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.764 13:11:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@353 -- # local d=2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.764 13:11:55 json_config -- scripts/common.sh@355 -- # echo 2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.764 13:11:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.764 13:11:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.764 13:11:55 json_config -- scripts/common.sh@368 -- # return 0 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.764 --rc genhtml_branch_coverage=1 00:04:08.764 --rc genhtml_function_coverage=1 00:04:08.764 --rc genhtml_legend=1 00:04:08.764 --rc geninfo_all_blocks=1 00:04:08.764 --rc geninfo_unexecuted_blocks=1 00:04:08.764 00:04:08.764 ' 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.764 --rc genhtml_branch_coverage=1 00:04:08.764 --rc genhtml_function_coverage=1 00:04:08.764 --rc genhtml_legend=1 00:04:08.764 --rc geninfo_all_blocks=1 00:04:08.764 --rc geninfo_unexecuted_blocks=1 00:04:08.764 00:04:08.764 ' 00:04:08.764 13:11:55 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.764 --rc genhtml_branch_coverage=1 00:04:08.764 --rc genhtml_function_coverage=1 00:04:08.764 --rc genhtml_legend=1 00:04:08.764 --rc geninfo_all_blocks=1 00:04:08.764 --rc geninfo_unexecuted_blocks=1 00:04:08.764 00:04:08.764 ' 00:04:08.764 13:11:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.764 --rc genhtml_branch_coverage=1 00:04:08.764 --rc genhtml_function_coverage=1 00:04:08.764 --rc genhtml_legend=1 00:04:08.764 --rc geninfo_all_blocks=1 00:04:08.764 --rc geninfo_unexecuted_blocks=1 00:04:08.764 00:04:08.764 ' 00:04:08.764 13:11:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:08.764 13:11:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:08.764 13:11:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.764 13:11:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.764 13:11:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.764 13:11:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.764 13:11:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.764 13:11:55 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.764 13:11:55 json_config -- paths/export.sh@5 -- # export PATH 00:04:08.764 13:11:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@51 -- # : 0 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:08.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:08.764 13:11:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:08.764 13:11:55 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:08.765 INFO: JSON configuration test init 00:04:08.765 13:11:55 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 13:11:55 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:08.765 13:11:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:08.765 13:11:55 json_config -- json_config/common.sh@10 -- # shift 00:04:08.765 13:11:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:08.765 13:11:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:08.765 13:11:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:08.765 13:11:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.765 13:11:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:08.765 13:11:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1911499 00:04:08.765 13:11:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:08.765 Waiting for target to run... 
00:04:08.765 13:11:55 json_config -- json_config/common.sh@25 -- # waitforlisten 1911499 /var/tmp/spdk_tgt.sock 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 1911499 ']' 00:04:08.765 13:11:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:08.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.765 13:11:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 [2024-12-06 13:11:55.401572] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:08.765 [2024-12-06 13:11:55.401647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911499 ] 00:04:09.335 [2024-12-06 13:11:55.755578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.335 [2024-12-06 13:11:55.780392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.595 13:11:56 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.595 13:11:56 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:09.595 13:11:56 json_config -- json_config/common.sh@26 -- # echo '' 00:04:09.595 00:04:09.595 13:11:56 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:09.595 13:11:56 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:09.595 13:11:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.595 13:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.595 13:11:56 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:09.595 13:11:56 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:09.595 13:11:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.595 13:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.595 13:11:56 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:09.855 13:11:56 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:09.855 13:11:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:10.425 13:11:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.425 13:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:10.425 13:11:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:10.425 13:11:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@54 -- # sort 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:10.426 13:11:56 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:10.426 13:11:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:10.426 13:11:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.426 13:11:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:10.426 13:11:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.426 13:11:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:10.426 13:11:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.426 13:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.686 MallocForNvmf0 00:04:10.686 13:11:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:10.686 13:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:10.946 MallocForNvmf1 00:04:10.946 13:11:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.946 13:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.946 [2024-12-06 13:11:57.538658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.946 13:11:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.946 13:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:11.207 13:11:57 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.207 13:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.467 13:11:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.467 13:11:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.467 13:11:58 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.467 13:11:58 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.751 [2024-12-06 13:11:58.192788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.751 13:11:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:11.751 13:11:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.751 13:11:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.751 13:11:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:11.751 13:11:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.751 13:11:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.751 13:11:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:11.751 13:11:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.751 13:11:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:12.012 MallocBdevForConfigChangeCheck 00:04:12.012 13:11:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:12.012 13:11:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.012 13:11:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.012 13:11:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:12.012 13:11:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.272 13:11:58 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:12.272 INFO: shutting down applications... 00:04:12.272 13:11:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:12.272 13:11:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:12.272 13:11:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:12.272 13:11:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:12.841 Calling clear_iscsi_subsystem 00:04:12.841 Calling clear_nvmf_subsystem 00:04:12.841 Calling clear_nbd_subsystem 00:04:12.841 Calling clear_ublk_subsystem 00:04:12.841 Calling clear_vhost_blk_subsystem 00:04:12.841 Calling clear_vhost_scsi_subsystem 00:04:12.841 Calling clear_bdev_subsystem 00:04:12.841 13:11:59 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:12.841 13:11:59 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:12.841 13:11:59 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:12.841 13:11:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.841 13:11:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:12.841 13:11:59 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:13.101 13:11:59 json_config -- json_config/json_config.sh@352 -- # break 00:04:13.101 13:11:59 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:13.101 13:11:59 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:13.101 13:11:59 json_config -- json_config/common.sh@31 -- # local app=target 00:04:13.101 13:11:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.101 13:11:59 json_config -- json_config/common.sh@35 -- # [[ -n 1911499 ]] 00:04:13.101 13:11:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1911499 00:04:13.101 13:11:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.101 13:11:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.101 13:11:59 json_config -- json_config/common.sh@41 -- # kill -0 1911499 00:04:13.101 13:11:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:13.672 13:12:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:13.672 13:12:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.672 13:12:00 json_config -- json_config/common.sh@41 -- # kill -0 1911499 00:04:13.672 13:12:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:13.672 13:12:00 json_config -- json_config/common.sh@43 -- # break 00:04:13.672 13:12:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:13.672 13:12:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:13.672 SPDK target shutdown done 00:04:13.672 13:12:00 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:13.672 INFO: relaunching applications... 
00:04:13.672 13:12:00 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.672 13:12:00 json_config -- json_config/common.sh@9 -- # local app=target 00:04:13.672 13:12:00 json_config -- json_config/common.sh@10 -- # shift 00:04:13.672 13:12:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.672 13:12:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.672 13:12:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.672 13:12:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.672 13:12:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.672 13:12:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1912482 00:04:13.672 13:12:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.672 Waiting for target to run... 00:04:13.672 13:12:00 json_config -- json_config/common.sh@25 -- # waitforlisten 1912482 /var/tmp/spdk_tgt.sock 00:04:13.672 13:12:00 json_config -- common/autotest_common.sh@835 -- # '[' -z 1912482 ']' 00:04:13.672 13:12:00 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.672 13:12:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.672 13:12:00 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.672 13:12:00 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:13.672 13:12:00 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.672 13:12:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.672 [2024-12-06 13:12:00.146320] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:13.672 [2024-12-06 13:12:00.146399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912482 ] 00:04:13.932 [2024-12-06 13:12:00.491960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.932 [2024-12-06 13:12:00.522861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.503 [2024-12-06 13:12:01.022370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.504 [2024-12-06 13:12:01.054767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:14.504 13:12:01 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.504 13:12:01 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:14.504 13:12:01 json_config -- json_config/common.sh@26 -- # echo '' 00:04:14.504 00:04:14.504 13:12:01 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:14.504 13:12:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:14.504 INFO: Checking if target configuration is the same... 
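The relaunch above starts `spdk_tgt` and then blocks in `waitforlisten <pid> /var/tmp/spdk_tgt.sock` until the RPC socket accepts connections. A hedged sketch of that wait, assuming hypothetical names (`wait_for_socket` is not the SPDK helper) and using a short Python connect as a portable stand-in for the helper's internal socket probe:

```shell
#!/usr/bin/env bash
# Sketch: poll until a process is listening on a UNIX domain socket,
# bailing out early if the process dies before it ever listens.
wait_for_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1        # app died before listening
        python3 - "$sock" <<'EOF' && return 0
import socket, sys
s = socket.socket(socket.AF_UNIX)
try:
    s.connect(sys.argv[1])
except OSError:
    raise SystemExit(1)
EOF
        sleep 0.1
    done
    return 1                                          # retry budget exhausted
}
```

The early `kill -0` check matters: without it, a crashed target would make the caller spin for the full retry budget instead of failing fast.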
00:04:14.504 13:12:01 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.504 13:12:01 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:14.504 13:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:14.504 + '[' 2 -ne 2 ']' 00:04:14.504 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:14.504 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:14.504 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:14.504 +++ basename /dev/fd/62 00:04:14.504 ++ mktemp /tmp/62.XXX 00:04:14.504 + tmp_file_1=/tmp/62.XL0 00:04:14.504 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.504 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:14.504 + tmp_file_2=/tmp/spdk_tgt_config.json.O1w 00:04:14.504 + ret=0 00:04:14.504 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.072 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.072 + diff -u /tmp/62.XL0 /tmp/spdk_tgt_config.json.O1w 00:04:15.072 + echo 'INFO: JSON config files are the same' 00:04:15.072 INFO: JSON config files are the same 00:04:15.072 + rm /tmp/62.XL0 /tmp/spdk_tgt_config.json.O1w 00:04:15.072 + exit 0 00:04:15.072 13:12:01 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:15.072 13:12:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:15.072 INFO: changing configuration and checking if this can be detected... 
00:04:15.072 13:12:01 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.072 13:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.072 13:12:01 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.072 13:12:01 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:15.072 13:12:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.072 + '[' 2 -ne 2 ']' 00:04:15.072 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:15.072 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:15.072 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.072 +++ basename /dev/fd/62 00:04:15.072 ++ mktemp /tmp/62.XXX 00:04:15.072 + tmp_file_1=/tmp/62.zc5 00:04:15.072 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.072 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:15.072 + tmp_file_2=/tmp/spdk_tgt_config.json.a6h 00:04:15.072 + ret=0 00:04:15.072 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.333 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.594 + diff -u /tmp/62.zc5 /tmp/spdk_tgt_config.json.a6h 00:04:15.594 + ret=1 00:04:15.594 + echo '=== Start of file: /tmp/62.zc5 ===' 00:04:15.594 + cat /tmp/62.zc5 00:04:15.594 + echo '=== End of file: /tmp/62.zc5 ===' 00:04:15.594 + echo '' 00:04:15.594 + echo '=== Start of file: /tmp/spdk_tgt_config.json.a6h ===' 00:04:15.594 + cat /tmp/spdk_tgt_config.json.a6h 00:04:15.594 + echo '=== End of file: /tmp/spdk_tgt_config.json.a6h ===' 00:04:15.594 + echo '' 00:04:15.594 + rm /tmp/62.zc5 /tmp/spdk_tgt_config.json.a6h 00:04:15.594 + exit 1 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:15.594 INFO: configuration change detected. 
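Both comparisons above (`json_diff.sh` before and after deleting `MallocBdevForConfigChangeCheck`) use the same technique: dump the live config, normalize both dumps into a canonical key order (the test does this with `config_filter.py -method sort`), and let `diff -u`'s exit code signal whether anything changed. A minimal sketch of the idea, assuming a stand-in `sort_json` normalizer in place of `config_filter.py`:

```shell
#!/usr/bin/env bash
# Sketch: canonicalize two JSON documents, then use diff's exit status
# as the change-detection signal, mirroring the traced json_diff.sh flow.
sort_json() {
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

tmp1=$(mktemp /tmp/cfg1.XXX)
tmp2=$(mktemp /tmp/cfg2.XXX)
echo '{"b": 1, "a": 2}' | sort_json > "$tmp1"   # same content, different key order
echo '{"a": 2, "b": 1}' | sort_json > "$tmp2"
if diff -u "$tmp1" "$tmp2" >/dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp1" "$tmp2"
# prints: INFO: JSON config files are the same
```

Sorting before diffing is what makes the comparison semantic rather than textual: the RPC server is free to emit subsystems in any order without tripping a false positive.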
00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@324 -- # [[ -n 1912482 ]] 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.594 13:12:02 json_config -- json_config/json_config.sh@330 -- # killprocess 1912482 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@954 -- # '[' -z 1912482 ']' 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@958 -- # kill -0 
1912482 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@959 -- # uname 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1912482 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1912482' 00:04:15.594 killing process with pid 1912482 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@973 -- # kill 1912482 00:04:15.594 13:12:02 json_config -- common/autotest_common.sh@978 -- # wait 1912482 00:04:15.854 13:12:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.854 13:12:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:15.854 13:12:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.854 13:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.854 13:12:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:15.854 13:12:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:15.854 INFO: Success 00:04:15.854 00:04:15.854 real 0m7.362s 00:04:15.854 user 0m8.773s 00:04:15.854 sys 0m2.040s 00:04:15.854 13:12:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.854 13:12:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.854 ************************************ 00:04:15.854 END TEST json_config 00:04:15.854 ************************************ 00:04:16.115 13:12:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
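The `killprocess` trace above does more than `kill`: it resolves the process name with `ps --no-headers -o comm=`, refuses to proceed if the pid resolved to `sudo` (killing the sudo wrapper would orphan the real target), then signals and reaps. A simplified sketch under those assumptions (this re-derives the observable steps, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess safety checks traced above: verify the pid is
# alive, check its command name, never kill a sudo wrapper, then reap.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1            # no such process
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = sudo ] && return 1                    # would orphan the real target
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap (works for own children)
}
```

`wait` only reaps children of the current shell; for unrelated pids the trace falls back to polling, as in the shutdown loop earlier.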
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:16.115 13:12:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.115 13:12:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.115 13:12:02 -- common/autotest_common.sh@10 -- # set +x 00:04:16.115 ************************************ 00:04:16.115 START TEST json_config_extra_key 00:04:16.115 ************************************ 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.115 13:12:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.115 13:12:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.116 --rc genhtml_branch_coverage=1 00:04:16.116 --rc genhtml_function_coverage=1 00:04:16.116 --rc genhtml_legend=1 00:04:16.116 --rc geninfo_all_blocks=1 
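The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` into arrays and compares them component-wise, padding the shorter one with zeros. A simplified re-derivation of that comparison, assuming purely numeric components (the real `scripts/common.sh` helper handles more operators than "less than"):

```shell
#!/usr/bin/env bash
# Sketch: component-wise dotted-version comparison, as traced above for
# `lt 1.15 2` (is lcov 1.15 older than 2?). Returns 0 when $1 < $2.
version_lt() {
    local IFS=.-:                     # split on the same separators as the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for ((i = 0; i < len; i++)); do
        a=${v1[i]:-0}                 # missing components compare as 0
        b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                          # equal versions are not "less than"
}
```

This is why `1.15 < 2` holds even though a plain string comparison would get it wrong: the first components are compared as the integers 1 and 2, not as text.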
00:04:16.116 --rc geninfo_unexecuted_blocks=1 00:04:16.116 00:04:16.116 ' 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.116 --rc genhtml_branch_coverage=1 00:04:16.116 --rc genhtml_function_coverage=1 00:04:16.116 --rc genhtml_legend=1 00:04:16.116 --rc geninfo_all_blocks=1 00:04:16.116 --rc geninfo_unexecuted_blocks=1 00:04:16.116 00:04:16.116 ' 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.116 --rc genhtml_branch_coverage=1 00:04:16.116 --rc genhtml_function_coverage=1 00:04:16.116 --rc genhtml_legend=1 00:04:16.116 --rc geninfo_all_blocks=1 00:04:16.116 --rc geninfo_unexecuted_blocks=1 00:04:16.116 00:04:16.116 ' 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.116 --rc genhtml_branch_coverage=1 00:04:16.116 --rc genhtml_function_coverage=1 00:04:16.116 --rc genhtml_legend=1 00:04:16.116 --rc geninfo_all_blocks=1 00:04:16.116 --rc geninfo_unexecuted_blocks=1 00:04:16.116 00:04:16.116 ' 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:16.116 13:12:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.116 13:12:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.116 13:12:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.116 13:12:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.116 13:12:02 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.116 13:12:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.116 13:12:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.116 13:12:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:16.116 13:12:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:16.116 13:12:02 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.116 13:12:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:16.116 INFO: launching applications... 00:04:16.116 13:12:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1913306 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.116 Waiting for target to run... 
00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1913306 /var/tmp/spdk_tgt.sock 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1913306 ']' 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.116 13:12:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.116 13:12:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:16.377 [2024-12-06 13:12:02.826017] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:16.377 [2024-12-06 13:12:02.826093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913306 ] 00:04:16.637 [2024-12-06 13:12:03.113853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.637 [2024-12-06 13:12:03.137581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.207 13:12:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.207 13:12:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:17.207 00:04:17.207 13:12:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:17.207 INFO: shutting down applications... 00:04:17.207 13:12:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1913306 ]] 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1913306 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1913306 00:04:17.207 13:12:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:17.467 13:12:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:17.467 13:12:04 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:17.467 13:12:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1913306 00:04:17.467 13:12:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:17.467 13:12:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:17.467 13:12:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:17.467 13:12:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:17.467 SPDK target shutdown done 00:04:17.467 13:12:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:17.467 Success 00:04:17.467 00:04:17.467 real 0m1.570s 00:04:17.467 user 0m1.177s 00:04:17.467 sys 0m0.418s 00:04:17.727 13:12:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.727 13:12:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:17.727 ************************************ 00:04:17.727 END TEST json_config_extra_key 00:04:17.727 ************************************ 00:04:17.727 13:12:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:17.728 13:12:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.728 13:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.728 13:12:04 -- common/autotest_common.sh@10 -- # set +x 00:04:17.728 ************************************ 00:04:17.728 START TEST alias_rpc 00:04:17.728 ************************************ 00:04:17.728 13:12:04 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:17.728 * Looking for test storage... 
00:04:17.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:17.728 13:12:04 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:17.728 13:12:04 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:17.728 13:12:04 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.728 13:12:04 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.728 13:12:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.988 13:12:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.988 --rc genhtml_branch_coverage=1 00:04:17.988 --rc genhtml_function_coverage=1 00:04:17.988 --rc genhtml_legend=1 00:04:17.988 --rc geninfo_all_blocks=1 00:04:17.988 --rc geninfo_unexecuted_blocks=1 00:04:17.988 00:04:17.988 ' 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.988 --rc genhtml_branch_coverage=1 00:04:17.988 --rc genhtml_function_coverage=1 00:04:17.988 --rc genhtml_legend=1 00:04:17.988 --rc geninfo_all_blocks=1 00:04:17.988 --rc geninfo_unexecuted_blocks=1 00:04:17.988 00:04:17.988 ' 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:04:17.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.988 --rc genhtml_branch_coverage=1 00:04:17.988 --rc genhtml_function_coverage=1 00:04:17.988 --rc genhtml_legend=1 00:04:17.988 --rc geninfo_all_blocks=1 00:04:17.988 --rc geninfo_unexecuted_blocks=1 00:04:17.988 00:04:17.988 ' 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.988 --rc genhtml_branch_coverage=1 00:04:17.988 --rc genhtml_function_coverage=1 00:04:17.988 --rc genhtml_legend=1 00:04:17.988 --rc geninfo_all_blocks=1 00:04:17.988 --rc geninfo_unexecuted_blocks=1 00:04:17.988 00:04:17.988 ' 00:04:17.988 13:12:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:17.988 13:12:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1913697 00:04:17.988 13:12:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1913697 00:04:17.988 13:12:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1913697 ']' 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.988 13:12:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.988 [2024-12-06 13:12:04.458053] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:17.988 [2024-12-06 13:12:04.458124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913697 ] 00:04:17.988 [2024-12-06 13:12:04.544741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.988 [2024-12-06 13:12:04.579125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:18.955 13:12:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:18.955 13:12:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1913697 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1913697 ']' 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1913697 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913697 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913697' 00:04:18.955 killing process with pid 1913697 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 1913697 00:04:18.955 13:12:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 1913697 00:04:19.215 00:04:19.215 real 0m1.479s 00:04:19.215 user 0m1.614s 00:04:19.215 sys 0m0.414s 00:04:19.215 13:12:05 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.215 13:12:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.215 ************************************ 00:04:19.215 END TEST alias_rpc 00:04:19.215 ************************************ 00:04:19.215 13:12:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:19.215 13:12:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:19.215 13:12:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.215 13:12:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.215 13:12:05 -- common/autotest_common.sh@10 -- # set +x 00:04:19.215 ************************************ 00:04:19.215 START TEST spdkcli_tcp 00:04:19.215 ************************************ 00:04:19.215 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:19.215 * Looking for test storage... 
00:04:19.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:19.215 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.215 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.215 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.475 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.476 13:12:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.476 --rc genhtml_branch_coverage=1 00:04:19.476 --rc genhtml_function_coverage=1 00:04:19.476 --rc genhtml_legend=1 00:04:19.476 --rc geninfo_all_blocks=1 00:04:19.476 --rc geninfo_unexecuted_blocks=1 00:04:19.476 00:04:19.476 ' 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.476 --rc genhtml_branch_coverage=1 00:04:19.476 --rc genhtml_function_coverage=1 00:04:19.476 --rc genhtml_legend=1 00:04:19.476 --rc geninfo_all_blocks=1 00:04:19.476 --rc geninfo_unexecuted_blocks=1 00:04:19.476 00:04:19.476 ' 00:04:19.476 13:12:05 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.476 --rc genhtml_branch_coverage=1 00:04:19.476 --rc genhtml_function_coverage=1 00:04:19.476 --rc genhtml_legend=1 00:04:19.476 --rc geninfo_all_blocks=1 00:04:19.476 --rc geninfo_unexecuted_blocks=1 00:04:19.476 00:04:19.476 ' 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.476 --rc genhtml_branch_coverage=1 00:04:19.476 --rc genhtml_function_coverage=1 00:04:19.476 --rc genhtml_legend=1 00:04:19.476 --rc geninfo_all_blocks=1 00:04:19.476 --rc geninfo_unexecuted_blocks=1 00:04:19.476 00:04:19.476 ' 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1914096 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1914096 00:04:19.476 13:12:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1914096 ']' 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.476 13:12:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.476 [2024-12-06 13:12:06.014597] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:19.476 [2024-12-06 13:12:06.014651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914096 ] 00:04:19.476 [2024-12-06 13:12:06.101552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.736 [2024-12-06 13:12:06.136349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.736 [2024-12-06 13:12:06.136349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.307 13:12:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.307 13:12:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:20.307 13:12:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1914129 00:04:20.307 13:12:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:20.307 13:12:06 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:20.568 [ 00:04:20.568 "bdev_malloc_delete", 00:04:20.568 "bdev_malloc_create", 00:04:20.568 "bdev_null_resize", 00:04:20.568 "bdev_null_delete", 00:04:20.568 "bdev_null_create", 00:04:20.568 "bdev_nvme_cuse_unregister", 00:04:20.568 "bdev_nvme_cuse_register", 00:04:20.568 "bdev_opal_new_user", 00:04:20.568 "bdev_opal_set_lock_state", 00:04:20.568 "bdev_opal_delete", 00:04:20.568 "bdev_opal_get_info", 00:04:20.568 "bdev_opal_create", 00:04:20.568 "bdev_nvme_opal_revert", 00:04:20.568 "bdev_nvme_opal_init", 00:04:20.568 "bdev_nvme_send_cmd", 00:04:20.568 "bdev_nvme_set_keys", 00:04:20.568 "bdev_nvme_get_path_iostat", 00:04:20.568 "bdev_nvme_get_mdns_discovery_info", 00:04:20.568 "bdev_nvme_stop_mdns_discovery", 00:04:20.568 "bdev_nvme_start_mdns_discovery", 00:04:20.568 "bdev_nvme_set_multipath_policy", 00:04:20.568 "bdev_nvme_set_preferred_path", 00:04:20.568 "bdev_nvme_get_io_paths", 00:04:20.568 "bdev_nvme_remove_error_injection", 00:04:20.568 "bdev_nvme_add_error_injection", 00:04:20.568 "bdev_nvme_get_discovery_info", 00:04:20.568 "bdev_nvme_stop_discovery", 00:04:20.568 "bdev_nvme_start_discovery", 00:04:20.568 "bdev_nvme_get_controller_health_info", 00:04:20.568 "bdev_nvme_disable_controller", 00:04:20.568 "bdev_nvme_enable_controller", 00:04:20.568 "bdev_nvme_reset_controller", 00:04:20.568 "bdev_nvme_get_transport_statistics", 00:04:20.568 "bdev_nvme_apply_firmware", 00:04:20.568 "bdev_nvme_detach_controller", 00:04:20.568 "bdev_nvme_get_controllers", 00:04:20.568 "bdev_nvme_attach_controller", 00:04:20.568 "bdev_nvme_set_hotplug", 00:04:20.568 "bdev_nvme_set_options", 00:04:20.568 "bdev_passthru_delete", 00:04:20.568 "bdev_passthru_create", 00:04:20.568 "bdev_lvol_set_parent_bdev", 00:04:20.568 "bdev_lvol_set_parent", 00:04:20.568 "bdev_lvol_check_shallow_copy", 00:04:20.568 "bdev_lvol_start_shallow_copy", 00:04:20.568 "bdev_lvol_grow_lvstore", 00:04:20.568 
"bdev_lvol_get_lvols", 00:04:20.568 "bdev_lvol_get_lvstores", 00:04:20.568 "bdev_lvol_delete", 00:04:20.568 "bdev_lvol_set_read_only", 00:04:20.568 "bdev_lvol_resize", 00:04:20.568 "bdev_lvol_decouple_parent", 00:04:20.568 "bdev_lvol_inflate", 00:04:20.568 "bdev_lvol_rename", 00:04:20.568 "bdev_lvol_clone_bdev", 00:04:20.568 "bdev_lvol_clone", 00:04:20.568 "bdev_lvol_snapshot", 00:04:20.568 "bdev_lvol_create", 00:04:20.568 "bdev_lvol_delete_lvstore", 00:04:20.568 "bdev_lvol_rename_lvstore", 00:04:20.568 "bdev_lvol_create_lvstore", 00:04:20.568 "bdev_raid_set_options", 00:04:20.568 "bdev_raid_remove_base_bdev", 00:04:20.568 "bdev_raid_add_base_bdev", 00:04:20.568 "bdev_raid_delete", 00:04:20.568 "bdev_raid_create", 00:04:20.568 "bdev_raid_get_bdevs", 00:04:20.568 "bdev_error_inject_error", 00:04:20.568 "bdev_error_delete", 00:04:20.568 "bdev_error_create", 00:04:20.568 "bdev_split_delete", 00:04:20.568 "bdev_split_create", 00:04:20.568 "bdev_delay_delete", 00:04:20.568 "bdev_delay_create", 00:04:20.568 "bdev_delay_update_latency", 00:04:20.568 "bdev_zone_block_delete", 00:04:20.568 "bdev_zone_block_create", 00:04:20.568 "blobfs_create", 00:04:20.568 "blobfs_detect", 00:04:20.568 "blobfs_set_cache_size", 00:04:20.568 "bdev_aio_delete", 00:04:20.568 "bdev_aio_rescan", 00:04:20.568 "bdev_aio_create", 00:04:20.568 "bdev_ftl_set_property", 00:04:20.568 "bdev_ftl_get_properties", 00:04:20.568 "bdev_ftl_get_stats", 00:04:20.568 "bdev_ftl_unmap", 00:04:20.568 "bdev_ftl_unload", 00:04:20.568 "bdev_ftl_delete", 00:04:20.568 "bdev_ftl_load", 00:04:20.568 "bdev_ftl_create", 00:04:20.568 "bdev_virtio_attach_controller", 00:04:20.568 "bdev_virtio_scsi_get_devices", 00:04:20.568 "bdev_virtio_detach_controller", 00:04:20.568 "bdev_virtio_blk_set_hotplug", 00:04:20.568 "bdev_iscsi_delete", 00:04:20.568 "bdev_iscsi_create", 00:04:20.568 "bdev_iscsi_set_options", 00:04:20.568 "accel_error_inject_error", 00:04:20.568 "ioat_scan_accel_module", 00:04:20.568 "dsa_scan_accel_module", 
00:04:20.568 "iaa_scan_accel_module", 00:04:20.568 "vfu_virtio_create_fs_endpoint", 00:04:20.568 "vfu_virtio_create_scsi_endpoint", 00:04:20.568 "vfu_virtio_scsi_remove_target", 00:04:20.568 "vfu_virtio_scsi_add_target", 00:04:20.568 "vfu_virtio_create_blk_endpoint", 00:04:20.568 "vfu_virtio_delete_endpoint", 00:04:20.568 "keyring_file_remove_key", 00:04:20.568 "keyring_file_add_key", 00:04:20.568 "keyring_linux_set_options", 00:04:20.568 "fsdev_aio_delete", 00:04:20.568 "fsdev_aio_create", 00:04:20.568 "iscsi_get_histogram", 00:04:20.568 "iscsi_enable_histogram", 00:04:20.568 "iscsi_set_options", 00:04:20.568 "iscsi_get_auth_groups", 00:04:20.568 "iscsi_auth_group_remove_secret", 00:04:20.568 "iscsi_auth_group_add_secret", 00:04:20.568 "iscsi_delete_auth_group", 00:04:20.568 "iscsi_create_auth_group", 00:04:20.568 "iscsi_set_discovery_auth", 00:04:20.568 "iscsi_get_options", 00:04:20.568 "iscsi_target_node_request_logout", 00:04:20.568 "iscsi_target_node_set_redirect", 00:04:20.568 "iscsi_target_node_set_auth", 00:04:20.568 "iscsi_target_node_add_lun", 00:04:20.568 "iscsi_get_stats", 00:04:20.568 "iscsi_get_connections", 00:04:20.568 "iscsi_portal_group_set_auth", 00:04:20.568 "iscsi_start_portal_group", 00:04:20.568 "iscsi_delete_portal_group", 00:04:20.568 "iscsi_create_portal_group", 00:04:20.568 "iscsi_get_portal_groups", 00:04:20.568 "iscsi_delete_target_node", 00:04:20.568 "iscsi_target_node_remove_pg_ig_maps", 00:04:20.568 "iscsi_target_node_add_pg_ig_maps", 00:04:20.568 "iscsi_create_target_node", 00:04:20.568 "iscsi_get_target_nodes", 00:04:20.568 "iscsi_delete_initiator_group", 00:04:20.568 "iscsi_initiator_group_remove_initiators", 00:04:20.568 "iscsi_initiator_group_add_initiators", 00:04:20.568 "iscsi_create_initiator_group", 00:04:20.568 "iscsi_get_initiator_groups", 00:04:20.568 "nvmf_set_crdt", 00:04:20.568 "nvmf_set_config", 00:04:20.568 "nvmf_set_max_subsystems", 00:04:20.568 "nvmf_stop_mdns_prr", 00:04:20.568 "nvmf_publish_mdns_prr", 
00:04:20.568 "nvmf_subsystem_get_listeners", 00:04:20.568 "nvmf_subsystem_get_qpairs", 00:04:20.568 "nvmf_subsystem_get_controllers", 00:04:20.568 "nvmf_get_stats", 00:04:20.568 "nvmf_get_transports", 00:04:20.568 "nvmf_create_transport", 00:04:20.568 "nvmf_get_targets", 00:04:20.568 "nvmf_delete_target", 00:04:20.568 "nvmf_create_target", 00:04:20.568 "nvmf_subsystem_allow_any_host", 00:04:20.568 "nvmf_subsystem_set_keys", 00:04:20.568 "nvmf_subsystem_remove_host", 00:04:20.568 "nvmf_subsystem_add_host", 00:04:20.568 "nvmf_ns_remove_host", 00:04:20.568 "nvmf_ns_add_host", 00:04:20.568 "nvmf_subsystem_remove_ns", 00:04:20.568 "nvmf_subsystem_set_ns_ana_group", 00:04:20.568 "nvmf_subsystem_add_ns", 00:04:20.568 "nvmf_subsystem_listener_set_ana_state", 00:04:20.568 "nvmf_discovery_get_referrals", 00:04:20.568 "nvmf_discovery_remove_referral", 00:04:20.568 "nvmf_discovery_add_referral", 00:04:20.568 "nvmf_subsystem_remove_listener", 00:04:20.568 "nvmf_subsystem_add_listener", 00:04:20.568 "nvmf_delete_subsystem", 00:04:20.568 "nvmf_create_subsystem", 00:04:20.568 "nvmf_get_subsystems", 00:04:20.568 "env_dpdk_get_mem_stats", 00:04:20.568 "nbd_get_disks", 00:04:20.568 "nbd_stop_disk", 00:04:20.568 "nbd_start_disk", 00:04:20.568 "ublk_recover_disk", 00:04:20.568 "ublk_get_disks", 00:04:20.568 "ublk_stop_disk", 00:04:20.568 "ublk_start_disk", 00:04:20.568 "ublk_destroy_target", 00:04:20.568 "ublk_create_target", 00:04:20.568 "virtio_blk_create_transport", 00:04:20.568 "virtio_blk_get_transports", 00:04:20.568 "vhost_controller_set_coalescing", 00:04:20.568 "vhost_get_controllers", 00:04:20.568 "vhost_delete_controller", 00:04:20.568 "vhost_create_blk_controller", 00:04:20.568 "vhost_scsi_controller_remove_target", 00:04:20.568 "vhost_scsi_controller_add_target", 00:04:20.568 "vhost_start_scsi_controller", 00:04:20.568 "vhost_create_scsi_controller", 00:04:20.568 "thread_set_cpumask", 00:04:20.568 "scheduler_set_options", 00:04:20.568 "framework_get_governor", 00:04:20.568 
"framework_get_scheduler", 00:04:20.568 "framework_set_scheduler", 00:04:20.568 "framework_get_reactors", 00:04:20.568 "thread_get_io_channels", 00:04:20.568 "thread_get_pollers", 00:04:20.568 "thread_get_stats", 00:04:20.568 "framework_monitor_context_switch", 00:04:20.568 "spdk_kill_instance", 00:04:20.569 "log_enable_timestamps", 00:04:20.569 "log_get_flags", 00:04:20.569 "log_clear_flag", 00:04:20.569 "log_set_flag", 00:04:20.569 "log_get_level", 00:04:20.569 "log_set_level", 00:04:20.569 "log_get_print_level", 00:04:20.569 "log_set_print_level", 00:04:20.569 "framework_enable_cpumask_locks", 00:04:20.569 "framework_disable_cpumask_locks", 00:04:20.569 "framework_wait_init", 00:04:20.569 "framework_start_init", 00:04:20.569 "scsi_get_devices", 00:04:20.569 "bdev_get_histogram", 00:04:20.569 "bdev_enable_histogram", 00:04:20.569 "bdev_set_qos_limit", 00:04:20.569 "bdev_set_qd_sampling_period", 00:04:20.569 "bdev_get_bdevs", 00:04:20.569 "bdev_reset_iostat", 00:04:20.569 "bdev_get_iostat", 00:04:20.569 "bdev_examine", 00:04:20.569 "bdev_wait_for_examine", 00:04:20.569 "bdev_set_options", 00:04:20.569 "accel_get_stats", 00:04:20.569 "accel_set_options", 00:04:20.569 "accel_set_driver", 00:04:20.569 "accel_crypto_key_destroy", 00:04:20.569 "accel_crypto_keys_get", 00:04:20.569 "accel_crypto_key_create", 00:04:20.569 "accel_assign_opc", 00:04:20.569 "accel_get_module_info", 00:04:20.569 "accel_get_opc_assignments", 00:04:20.569 "vmd_rescan", 00:04:20.569 "vmd_remove_device", 00:04:20.569 "vmd_enable", 00:04:20.569 "sock_get_default_impl", 00:04:20.569 "sock_set_default_impl", 00:04:20.569 "sock_impl_set_options", 00:04:20.569 "sock_impl_get_options", 00:04:20.569 "iobuf_get_stats", 00:04:20.569 "iobuf_set_options", 00:04:20.569 "keyring_get_keys", 00:04:20.569 "vfu_tgt_set_base_path", 00:04:20.569 "framework_get_pci_devices", 00:04:20.569 "framework_get_config", 00:04:20.569 "framework_get_subsystems", 00:04:20.569 "fsdev_set_opts", 00:04:20.569 "fsdev_get_opts", 
00:04:20.569 "trace_get_info", 00:04:20.569 "trace_get_tpoint_group_mask", 00:04:20.569 "trace_disable_tpoint_group", 00:04:20.569 "trace_enable_tpoint_group", 00:04:20.569 "trace_clear_tpoint_mask", 00:04:20.569 "trace_set_tpoint_mask", 00:04:20.569 "notify_get_notifications", 00:04:20.569 "notify_get_types", 00:04:20.569 "spdk_get_version", 00:04:20.569 "rpc_get_methods" 00:04:20.569 ] 00:04:20.569 13:12:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:20.569 13:12:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.569 13:12:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.569 13:12:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:20.569 13:12:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1914096 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1914096 ']' 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1914096 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1914096 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1914096' 00:04:20.569 killing process with pid 1914096 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1914096 00:04:20.569 13:12:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1914096 00:04:20.830 00:04:20.830 real 0m1.534s 00:04:20.830 user 0m2.808s 00:04:20.830 sys 0m0.467s 00:04:20.830 13:12:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.830 13:12:07 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.830 ************************************ 00:04:20.830 END TEST spdkcli_tcp 00:04:20.830 ************************************ 00:04:20.830 13:12:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:20.830 13:12:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.830 13:12:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.830 13:12:07 -- common/autotest_common.sh@10 -- # set +x 00:04:20.830 ************************************ 00:04:20.830 START TEST dpdk_mem_utility 00:04:20.830 ************************************ 00:04:20.830 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:20.830 * Looking for test storage... 00:04:20.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:20.830 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.830 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.830 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.091 13:12:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:04:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.091 --rc genhtml_branch_coverage=1 00:04:21.091 --rc genhtml_function_coverage=1 00:04:21.091 --rc genhtml_legend=1 00:04:21.091 --rc geninfo_all_blocks=1 00:04:21.091 --rc geninfo_unexecuted_blocks=1 00:04:21.091 00:04:21.091 ' 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.091 --rc genhtml_branch_coverage=1 00:04:21.091 --rc genhtml_function_coverage=1 00:04:21.091 --rc genhtml_legend=1 00:04:21.091 --rc geninfo_all_blocks=1 00:04:21.091 --rc geninfo_unexecuted_blocks=1 00:04:21.091 00:04:21.091 ' 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.091 --rc genhtml_branch_coverage=1 00:04:21.091 --rc genhtml_function_coverage=1 00:04:21.091 --rc genhtml_legend=1 00:04:21.091 --rc geninfo_all_blocks=1 00:04:21.091 --rc geninfo_unexecuted_blocks=1 00:04:21.091 00:04:21.091 ' 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.091 --rc genhtml_branch_coverage=1 00:04:21.091 --rc genhtml_function_coverage=1 00:04:21.091 --rc genhtml_legend=1 00:04:21.091 --rc geninfo_all_blocks=1 00:04:21.091 --rc geninfo_unexecuted_blocks=1 00:04:21.091 00:04:21.091 ' 00:04:21.091 13:12:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:21.091 13:12:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1914505 00:04:21.091 13:12:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1914505 00:04:21.091 13:12:07 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1914505 ']' 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.091 13:12:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:21.091 [2024-12-06 13:12:07.634491] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:21.091 [2024-12-06 13:12:07.634587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914505 ] 00:04:21.091 [2024-12-06 13:12:07.721040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.351 [2024-12-06 13:12:07.756116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.924 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.924 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:21.924 13:12:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:21.924 13:12:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:21.924 13:12:08 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.924 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:21.924 { 00:04:21.924 "filename": "/tmp/spdk_mem_dump.txt" 00:04:21.924 } 00:04:21.924 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.924 13:12:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:21.924 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:21.924 1 heaps totaling size 818.000000 MiB 00:04:21.924 size: 818.000000 MiB heap id: 0 00:04:21.924 end heaps---------- 00:04:21.924 9 mempools totaling size 603.782043 MiB 00:04:21.924 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:21.924 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:21.924 size: 100.555481 MiB name: bdev_io_1914505 00:04:21.924 size: 50.003479 MiB name: msgpool_1914505 00:04:21.924 size: 36.509338 MiB name: fsdev_io_1914505 00:04:21.924 size: 21.763794 MiB name: PDU_Pool 00:04:21.924 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:21.924 size: 4.133484 MiB name: evtpool_1914505 00:04:21.924 size: 0.026123 MiB name: Session_Pool 00:04:21.924 end mempools------- 00:04:21.924 6 memzones totaling size 4.142822 MiB 00:04:21.924 size: 1.000366 MiB name: RG_ring_0_1914505 00:04:21.924 size: 1.000366 MiB name: RG_ring_1_1914505 00:04:21.924 size: 1.000366 MiB name: RG_ring_4_1914505 00:04:21.924 size: 1.000366 MiB name: RG_ring_5_1914505 00:04:21.924 size: 0.125366 MiB name: RG_ring_2_1914505 00:04:21.924 size: 0.015991 MiB name: RG_ring_3_1914505 00:04:21.924 end memzones------- 00:04:21.924 13:12:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:21.924 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:21.924 list of free elements. 
size: 10.852478 MiB 00:04:21.924 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:21.924 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:21.924 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:21.924 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:21.924 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:21.924 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:21.924 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:21.924 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:21.924 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:21.924 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:21.924 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:21.924 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:21.924 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:21.924 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:21.924 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:21.924 list of standard malloc elements. 
size: 199.218628 MiB 00:04:21.924 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:21.924 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:21.924 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:21.924 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:21.924 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:21.924 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:21.924 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:21.924 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:21.924 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:21.924 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:21.924 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:21.924 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:21.924 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:21.924 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:21.924 list of memzone associated elements. 
size: 607.928894 MiB 00:04:21.924 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:21.924 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:21.924 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:21.924 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:21.924 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:21.924 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1914505_0 00:04:21.924 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:21.924 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1914505_0 00:04:21.924 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:21.924 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1914505_0 00:04:21.924 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:21.924 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:21.924 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:21.924 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:21.924 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:21.924 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1914505_0 00:04:21.924 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:21.924 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1914505 00:04:21.924 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:21.924 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1914505 00:04:21.924 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:21.924 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:21.924 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:21.924 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:21.924 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:21.924 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:21.924 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:21.924 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:21.924 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:21.924 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1914505 00:04:21.924 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:21.924 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1914505 00:04:21.924 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:21.924 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1914505 00:04:21.924 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:21.924 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1914505 00:04:21.924 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:21.924 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1914505 00:04:21.925 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:21.925 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1914505 00:04:21.925 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:21.925 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:21.925 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:21.925 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:21.925 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:21.925 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:21.925 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:21.925 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1914505 00:04:21.925 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:21.925 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1914505 00:04:21.925 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:21.925 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:21.925 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:21.925 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:21.925 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:21.925 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1914505 00:04:21.925 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:21.925 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:21.925 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:21.925 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1914505 00:04:21.925 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:21.925 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1914505 00:04:21.925 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:21.925 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1914505 00:04:21.925 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:21.925 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:21.925 13:12:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:21.925 13:12:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1914505 00:04:21.925 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1914505 ']' 00:04:21.925 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1914505 00:04:21.925 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:21.925 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.925 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1914505 00:04:22.232 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.232 13:12:08 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.232 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1914505' 00:04:22.232 killing process with pid 1914505 00:04:22.232 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1914505 00:04:22.232 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1914505 00:04:22.232 00:04:22.232 real 0m1.409s 00:04:22.232 user 0m1.484s 00:04:22.232 sys 0m0.424s 00:04:22.232 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.232 13:12:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.232 ************************************ 00:04:22.232 END TEST dpdk_mem_utility 00:04:22.232 ************************************ 00:04:22.232 13:12:08 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:22.232 13:12:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.232 13:12:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.232 13:12:08 -- common/autotest_common.sh@10 -- # set +x 00:04:22.232 ************************************ 00:04:22.232 START TEST event 00:04:22.232 ************************************ 00:04:22.232 13:12:08 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:22.528 * Looking for test storage... 
00:04:22.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:22.528 13:12:08 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.528 13:12:08 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.528 13:12:08 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.528 13:12:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.528 13:12:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.528 13:12:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.528 13:12:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.528 13:12:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.528 13:12:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.528 13:12:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.528 13:12:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.528 13:12:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.528 13:12:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.528 13:12:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.528 13:12:09 event -- scripts/common.sh@344 -- # case "$op" in 00:04:22.528 13:12:09 event -- scripts/common.sh@345 -- # : 1 00:04:22.528 13:12:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.528 13:12:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.528 13:12:09 event -- scripts/common.sh@365 -- # decimal 1 00:04:22.528 13:12:09 event -- scripts/common.sh@353 -- # local d=1 00:04:22.528 13:12:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.528 13:12:09 event -- scripts/common.sh@355 -- # echo 1 00:04:22.528 13:12:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.528 13:12:09 event -- scripts/common.sh@366 -- # decimal 2 00:04:22.528 13:12:09 event -- scripts/common.sh@353 -- # local d=2 00:04:22.528 13:12:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.528 13:12:09 event -- scripts/common.sh@355 -- # echo 2 00:04:22.528 13:12:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.528 13:12:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.528 13:12:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.528 13:12:09 event -- scripts/common.sh@368 -- # return 0 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.528 --rc genhtml_branch_coverage=1 00:04:22.528 --rc genhtml_function_coverage=1 00:04:22.528 --rc genhtml_legend=1 00:04:22.528 --rc geninfo_all_blocks=1 00:04:22.528 --rc geninfo_unexecuted_blocks=1 00:04:22.528 00:04:22.528 ' 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.528 --rc genhtml_branch_coverage=1 00:04:22.528 --rc genhtml_function_coverage=1 00:04:22.528 --rc genhtml_legend=1 00:04:22.528 --rc geninfo_all_blocks=1 00:04:22.528 --rc geninfo_unexecuted_blocks=1 00:04:22.528 00:04:22.528 ' 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.528 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:22.528 --rc genhtml_branch_coverage=1 00:04:22.528 --rc genhtml_function_coverage=1 00:04:22.528 --rc genhtml_legend=1 00:04:22.528 --rc geninfo_all_blocks=1 00:04:22.528 --rc geninfo_unexecuted_blocks=1 00:04:22.528 00:04:22.528 ' 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.528 --rc genhtml_branch_coverage=1 00:04:22.528 --rc genhtml_function_coverage=1 00:04:22.528 --rc genhtml_legend=1 00:04:22.528 --rc geninfo_all_blocks=1 00:04:22.528 --rc geninfo_unexecuted_blocks=1 00:04:22.528 00:04:22.528 ' 00:04:22.528 13:12:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:22.528 13:12:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:22.528 13:12:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:22.528 13:12:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.528 13:12:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.528 ************************************ 00:04:22.528 START TEST event_perf 00:04:22.528 ************************************ 00:04:22.528 13:12:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:22.528 Running I/O for 1 seconds...[2024-12-06 13:12:09.124917] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:22.528 [2024-12-06 13:12:09.125031] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914968 ] 00:04:22.828 [2024-12-06 13:12:09.217157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:22.828 [2024-12-06 13:12:09.262280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.828 [2024-12-06 13:12:09.262439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:22.828 [2024-12-06 13:12:09.262609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:22.828 [2024-12-06 13:12:09.262736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.772 Running I/O for 1 seconds... 00:04:23.772 lcore 0: 171759 00:04:23.772 lcore 1: 171760 00:04:23.772 lcore 2: 171761 00:04:23.772 lcore 3: 171759 00:04:23.772 done. 
00:04:23.772 00:04:23.772 real 0m1.189s 00:04:23.772 user 0m4.097s 00:04:23.772 sys 0m0.088s 00:04:23.772 13:12:10 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.772 13:12:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:23.772 ************************************ 00:04:23.772 END TEST event_perf 00:04:23.772 ************************************ 00:04:23.772 13:12:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:23.772 13:12:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:23.772 13:12:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.772 13:12:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.772 ************************************ 00:04:23.772 START TEST event_reactor 00:04:23.772 ************************************ 00:04:23.772 13:12:10 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:23.772 [2024-12-06 13:12:10.390969] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:23.772 [2024-12-06 13:12:10.391073] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915380 ] 00:04:24.032 [2024-12-06 13:12:10.480157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.032 [2024-12-06 13:12:10.514999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.972 test_start 00:04:24.972 oneshot 00:04:24.972 tick 100 00:04:24.972 tick 100 00:04:24.972 tick 250 00:04:24.972 tick 100 00:04:24.972 tick 100 00:04:24.972 tick 250 00:04:24.972 tick 100 00:04:24.972 tick 500 00:04:24.972 tick 100 00:04:24.972 tick 100 00:04:24.972 tick 250 00:04:24.972 tick 100 00:04:24.972 tick 100 00:04:24.972 test_end 00:04:24.972 00:04:24.972 real 0m1.173s 00:04:24.972 user 0m1.084s 00:04:24.972 sys 0m0.084s 00:04:24.972 13:12:11 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.972 13:12:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:24.972 ************************************ 00:04:24.972 END TEST event_reactor 00:04:24.972 ************************************ 00:04:24.972 13:12:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.972 13:12:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:24.972 13:12:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.972 13:12:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.972 ************************************ 00:04:24.972 START TEST event_reactor_perf 00:04:24.972 ************************************ 00:04:24.972 13:12:11 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:25.233 [2024-12-06 13:12:11.645281] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:25.233 [2024-12-06 13:12:11.645380] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915766 ] 00:04:25.233 [2024-12-06 13:12:11.735753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.233 [2024-12-06 13:12:11.774482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.174 test_start 00:04:26.174 test_end 00:04:26.174 Performance: 535631 events per second 00:04:26.174 00:04:26.174 real 0m1.177s 00:04:26.174 user 0m1.088s 00:04:26.174 sys 0m0.085s 00:04:26.174 13:12:12 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.174 13:12:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.174 ************************************ 00:04:26.174 END TEST event_reactor_perf 00:04:26.174 ************************************ 00:04:26.435 13:12:12 event -- event/event.sh@49 -- # uname -s 00:04:26.435 13:12:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:26.436 13:12:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:26.436 13:12:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.436 13:12:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.436 13:12:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.436 ************************************ 00:04:26.436 START TEST event_scheduler 00:04:26.436 ************************************ 00:04:26.436 13:12:12 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:26.436 * Looking for test storage... 00:04:26.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:26.436 13:12:12 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.436 13:12:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.436 13:12:12 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.436 13:12:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.436 --rc genhtml_branch_coverage=1 00:04:26.436 --rc genhtml_function_coverage=1 00:04:26.436 --rc genhtml_legend=1 00:04:26.436 --rc geninfo_all_blocks=1 00:04:26.436 --rc geninfo_unexecuted_blocks=1 00:04:26.436 00:04:26.436 ' 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.436 --rc genhtml_branch_coverage=1 00:04:26.436 --rc genhtml_function_coverage=1 00:04:26.436 --rc 
genhtml_legend=1 00:04:26.436 --rc geninfo_all_blocks=1 00:04:26.436 --rc geninfo_unexecuted_blocks=1 00:04:26.436 00:04:26.436 ' 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.436 --rc genhtml_branch_coverage=1 00:04:26.436 --rc genhtml_function_coverage=1 00:04:26.436 --rc genhtml_legend=1 00:04:26.436 --rc geninfo_all_blocks=1 00:04:26.436 --rc geninfo_unexecuted_blocks=1 00:04:26.436 00:04:26.436 ' 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.436 --rc genhtml_branch_coverage=1 00:04:26.436 --rc genhtml_function_coverage=1 00:04:26.436 --rc genhtml_legend=1 00:04:26.436 --rc geninfo_all_blocks=1 00:04:26.436 --rc geninfo_unexecuted_blocks=1 00:04:26.436 00:04:26.436 ' 00:04:26.436 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:26.436 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1916158 00:04:26.436 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.436 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1916158 00:04:26.436 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1916158 ']' 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.436 13:12:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:26.697 [2024-12-06 13:12:13.134063] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:26.697 [2024-12-06 13:12:13.134114] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916158 ] 00:04:26.697 [2024-12-06 13:12:13.224590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:26.697 [2024-12-06 13:12:13.270520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.697 [2024-12-06 13:12:13.270681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.697 [2024-12-06 13:12:13.270841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:26.697 [2024-12-06 13:12:13.270842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:27.640 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 [2024-12-06 13:12:13.945232] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:27.640 [2024-12-06 13:12:13.945252] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:27.640 [2024-12-06 13:12:13.945263] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:27.640 [2024-12-06 13:12:13.945269] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:27.640 [2024-12-06 13:12:13.945275] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 [2024-12-06 13:12:14.009308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:27.640 13:12:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:27.640 13:12:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.640 13:12:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 ************************************ 00:04:27.640 START TEST scheduler_create_thread 00:04:27.640 ************************************ 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 2 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 3 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 4 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 5 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 6 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 7 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 8 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.640 13:12:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.640 9 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:27.640 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.641 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.209 10 00:04:28.209 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.209 13:12:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:28.209 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.209 13:12:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.589 13:12:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.589 13:12:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:29.589 13:12:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:29.589 13:12:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.589 13:12:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 13:12:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.156 13:12:16 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:30.156 13:12:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.156 13:12:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.093 13:12:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.093 13:12:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:31.093 13:12:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:31.093 13:12:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.093 13:12:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.661 13:12:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.661 00:04:31.661 real 0m4.225s 00:04:31.661 user 0m0.023s 00:04:31.661 sys 0m0.009s 00:04:31.661 13:12:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.661 13:12:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.661 ************************************ 00:04:31.661 END TEST scheduler_create_thread 00:04:31.661 ************************************ 00:04:31.661 13:12:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:31.661 13:12:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1916158 00:04:31.661 13:12:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1916158 ']' 00:04:31.661 13:12:18 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1916158 00:04:31.661 13:12:18 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1916158 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1916158' 00:04:31.921 killing process with pid 1916158 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1916158 00:04:31.921 13:12:18 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1916158 00:04:32.183 [2024-12-06 13:12:18.655259] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:32.183 00:04:32.183 real 0m5.931s 00:04:32.183 user 0m13.857s 00:04:32.183 sys 0m0.414s 00:04:32.183 13:12:18 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.183 13:12:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.183 ************************************ 00:04:32.183 END TEST event_scheduler 00:04:32.183 ************************************ 00:04:32.444 13:12:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:32.444 13:12:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:32.444 13:12:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.444 13:12:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.444 13:12:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.444 ************************************ 00:04:32.444 START TEST app_repeat 00:04:32.444 ************************************ 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1917328 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1917328' 00:04:32.444 Process app_repeat pid: 1917328 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:32.444 spdk_app_start Round 0 00:04:32.444 13:12:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1917328 /var/tmp/spdk-nbd.sock 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1917328 ']' 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.444 13:12:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.444 [2024-12-06 13:12:18.938271] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:04:32.444 [2024-12-06 13:12:18.938338] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917328 ] 00:04:32.444 [2024-12-06 13:12:19.024559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.444 [2024-12-06 13:12:19.056948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.444 [2024-12-06 13:12:19.056949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.705 13:12:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.705 13:12:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:32.705 13:12:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.705 Malloc0 00:04:32.705 13:12:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.965 Malloc1 00:04:32.965 13:12:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.965 
13:12:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.965 13:12:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.226 /dev/nbd0 00:04:33.226 13:12:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.226 13:12:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:33.226 13:12:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:33.226 1+0 records in 00:04:33.226 1+0 records out 00:04:33.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269505 s, 15.2 MB/s 00:04:33.227 13:12:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.227 13:12:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:33.227 13:12:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.227 13:12:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:33.227 13:12:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:33.227 13:12:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.227 13:12:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.227 13:12:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.487 /dev/nbd1 00:04:33.487 13:12:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.487 13:12:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:33.487 13:12:19 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.487 1+0 records in 00:04:33.487 1+0 records out 00:04:33.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275935 s, 14.8 MB/s 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:33.487 13:12:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:33.487 13:12:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.487 13:12:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.488 13:12:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.488 13:12:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.488 13:12:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.488 13:12:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:33.488 { 00:04:33.488 "nbd_device": "/dev/nbd0", 00:04:33.488 "bdev_name": "Malloc0" 00:04:33.488 }, 00:04:33.488 { 00:04:33.488 "nbd_device": "/dev/nbd1", 00:04:33.488 "bdev_name": "Malloc1" 00:04:33.488 } 00:04:33.488 ]' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:33.748 { 00:04:33.748 "nbd_device": "/dev/nbd0", 00:04:33.748 "bdev_name": "Malloc0" 00:04:33.748 
}, 00:04:33.748 { 00:04:33.748 "nbd_device": "/dev/nbd1", 00:04:33.748 "bdev_name": "Malloc1" 00:04:33.748 } 00:04:33.748 ]' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:33.748 /dev/nbd1' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:33.748 /dev/nbd1' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:33.748 256+0 records in 00:04:33.748 256+0 records out 00:04:33.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118416 s, 88.6 MB/s 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:33.748 256+0 records in 00:04:33.748 256+0 records out 00:04:33.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120843 s, 86.8 MB/s 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:33.748 256+0 records in 00:04:33.748 256+0 records out 00:04:33.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128788 s, 81.4 MB/s 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:33.748 13:12:20 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:33.748 13:12:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.009 13:12:20 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.009 13:12:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.270 13:12:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.270 13:12:20 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.530 13:12:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:34.530 [2024-12-06 13:12:21.140253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.530 [2024-12-06 13:12:21.169422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.530 [2024-12-06 13:12:21.169422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.790 [2024-12-06 13:12:21.198687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:34.790 [2024-12-06 13:12:21.198718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.087 13:12:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.087 13:12:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:38.087 spdk_app_start Round 1 00:04:38.087 13:12:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1917328 /var/tmp/spdk-nbd.sock 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1917328 ']' 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.087 13:12:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:38.087 13:12:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.087 Malloc0 00:04:38.087 13:12:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.087 Malloc1 00:04:38.087 13:12:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.087 13:12:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:38.348 /dev/nbd0 00:04:38.348 13:12:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:38.348 13:12:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.348 1+0 records in 00:04:38.348 1+0 records out 00:04:38.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269615 s, 15.2 MB/s 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.348 13:12:24 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.348 13:12:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.348 13:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.348 13:12:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.348 13:12:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:38.608 /dev/nbd1 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:38.609 1+0 records in 00:04:38.609 1+0 records out 00:04:38.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276697 s, 14.8 MB/s 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:38.609 13:12:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:38.609 { 00:04:38.609 "nbd_device": "/dev/nbd0", 00:04:38.609 "bdev_name": "Malloc0" 00:04:38.609 }, 00:04:38.609 { 00:04:38.609 "nbd_device": "/dev/nbd1", 00:04:38.609 "bdev_name": "Malloc1" 00:04:38.609 } 00:04:38.609 ]' 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:38.609 { 00:04:38.609 "nbd_device": "/dev/nbd0", 00:04:38.609 "bdev_name": "Malloc0" 00:04:38.609 }, 00:04:38.609 { 00:04:38.609 "nbd_device": "/dev/nbd1", 00:04:38.609 "bdev_name": "Malloc1" 00:04:38.609 } 00:04:38.609 ]' 00:04:38.609 13:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.871 /dev/nbd1' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.871 /dev/nbd1' 00:04:38.871 
13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.871 256+0 records in 00:04:38.871 256+0 records out 00:04:38.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127273 s, 82.4 MB/s 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.871 256+0 records in 00:04:38.871 256+0 records out 00:04:38.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121913 s, 86.0 MB/s 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.871 256+0 records in 00:04:38.871 256+0 records out 00:04:38.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127341 s, 82.3 MB/s 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.871 13:12:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:39.132 13:12:25 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.132 13:12:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:39.392 13:12:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:39.392 13:12:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.653 13:12:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:39.653 [2024-12-06 13:12:26.260699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.653 [2024-12-06 13:12:26.289738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.653 [2024-12-06 13:12:26.289739] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.912 [2024-12-06 13:12:26.319286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.912 [2024-12-06 13:12:26.319317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:43.212 13:12:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.212 13:12:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:43.212 spdk_app_start Round 2 00:04:43.212 13:12:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1917328 /var/tmp/spdk-nbd.sock 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1917328 ']' 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.212 13:12:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.212 13:12:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.212 Malloc0 00:04:43.212 13:12:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:43.212 Malloc1 00:04:43.212 13:12:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.212 13:12:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:43.472 /dev/nbd0 00:04:43.472 13:12:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:43.472 13:12:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.472 1+0 records in 00:04:43.472 1+0 records out 00:04:43.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314513 s, 13.0 MB/s 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.472 13:12:29 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.472 13:12:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.472 13:12:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.472 13:12:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.472 13:12:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.472 /dev/nbd1 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.733 1+0 records in 00:04:43.733 1+0 records out 00:04:43.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279094 s, 14.7 MB/s 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:43.733 13:12:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:43.733 { 00:04:43.733 "nbd_device": "/dev/nbd0", 00:04:43.733 "bdev_name": "Malloc0" 00:04:43.733 }, 00:04:43.733 { 00:04:43.733 "nbd_device": "/dev/nbd1", 00:04:43.733 "bdev_name": "Malloc1" 00:04:43.733 } 00:04:43.733 ]' 00:04:43.733 13:12:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.733 { 00:04:43.733 "nbd_device": "/dev/nbd0", 00:04:43.733 "bdev_name": "Malloc0" 00:04:43.733 }, 00:04:43.733 { 00:04:43.733 "nbd_device": "/dev/nbd1", 00:04:43.733 "bdev_name": "Malloc1" 00:04:43.733 } 00:04:43.733 ]' 00:04:43.734 13:12:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.734 13:12:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.734 /dev/nbd1' 00:04:43.734 13:12:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.734 13:12:30 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.734 /dev/nbd1' 00:04:43.734 13:12:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.734 13:12:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.734 13:12:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.994 256+0 records in 00:04:43.994 256+0 records out 00:04:43.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127479 s, 82.3 MB/s 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.994 256+0 records in 00:04:43.994 256+0 records out 00:04:43.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012085 s, 86.8 MB/s 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.994 256+0 records in 00:04:43.994 256+0 records out 00:04:43.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129885 s, 80.7 MB/s 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.994 13:12:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.255 13:12:30 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break
00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:44.255 13:12:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:44.516 13:12:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:44.516 13:12:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:44.777 13:12:31 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:44.777 [2024-12-06 13:12:31.357227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:44.777 [2024-12-06 13:12:31.386439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:44.777 [2024-12-06 13:12:31.386440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.777 [2024-12-06 13:12:31.415446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:44.777 [2024-12-06 13:12:31.415478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:48.079 13:12:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1917328 /var/tmp/spdk-nbd.sock
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1917328 ']'
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:48.079 13:12:34 event.app_repeat -- event/event.sh@39 -- # killprocess 1917328
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1917328 ']'
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1917328
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917328
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917328'
killing process with pid 1917328
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1917328
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1917328
00:04:48.079 spdk_app_start is called in Round 0.
00:04:48.079 Shutdown signal received, stop current app iteration
00:04:48.079 Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 reinitialization...
00:04:48.079 spdk_app_start is called in Round 1.
00:04:48.079 Shutdown signal received, stop current app iteration
00:04:48.079 Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 reinitialization...
00:04:48.079 spdk_app_start is called in Round 2.
00:04:48.079 Shutdown signal received, stop current app iteration
00:04:48.079 Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 reinitialization...
00:04:48.079 spdk_app_start is called in Round 3.
00:04:48.079 Shutdown signal received, stop current app iteration
00:04:48.079 13:12:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:48.079 13:12:34 event.app_repeat -- event/event.sh@42 -- # return 0
00:04:48.079
00:04:48.079 real 0m15.719s
00:04:48.079 user 0m34.654s
00:04:48.079 sys 0m2.281s
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:48.079 13:12:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:48.079 ************************************
00:04:48.079 END TEST app_repeat
00:04:48.079 ************************************
00:04:48.079 13:12:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:48.079 13:12:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:48.080 13:12:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:48.080 13:12:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:48.080 13:12:34 event -- common/autotest_common.sh@10 -- # set +x
00:04:48.080 ************************************
00:04:48.080 START TEST cpu_locks
00:04:48.080 ************************************
00:04:48.080 13:12:34 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:48.341 * Looking for test storage...
00:04:48.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:48.341 13:12:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.341 --rc genhtml_branch_coverage=1
00:04:48.341 --rc genhtml_function_coverage=1
00:04:48.341 --rc genhtml_legend=1
00:04:48.341 --rc geninfo_all_blocks=1
00:04:48.341 --rc geninfo_unexecuted_blocks=1
00:04:48.341
00:04:48.341 '
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.341 --rc genhtml_branch_coverage=1
00:04:48.341 --rc genhtml_function_coverage=1
00:04:48.341 --rc genhtml_legend=1
00:04:48.341 --rc geninfo_all_blocks=1
00:04:48.341 --rc geninfo_unexecuted_blocks=1
00:04:48.341
00:04:48.341 '
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.341 --rc genhtml_branch_coverage=1
00:04:48.341 --rc genhtml_function_coverage=1
00:04:48.341 --rc genhtml_legend=1
00:04:48.341 --rc geninfo_all_blocks=1
00:04:48.341 --rc geninfo_unexecuted_blocks=1
00:04:48.341
00:04:48.341 '
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:48.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.341 --rc genhtml_branch_coverage=1
00:04:48.341 --rc genhtml_function_coverage=1
00:04:48.341 --rc genhtml_legend=1
00:04:48.341 --rc geninfo_all_blocks=1
00:04:48.341 --rc geninfo_unexecuted_blocks=1
00:04:48.341
00:04:48.341 '
00:04:48.341 13:12:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:48.341 13:12:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:48.341 13:12:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:48.341 13:12:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:48.341 13:12:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:48.341 ************************************
00:04:48.341 START TEST default_locks
00:04:48.341 ************************************
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1920810
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1920810
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1920810 ']'
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:48.341 13:12:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:48.613 [2024-12-06 13:12:35.003070] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:04:48.613 [2024-12-06 13:12:35.003119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920810 ]
00:04:48.613 [2024-12-06 13:12:35.087026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.613 [2024-12-06 13:12:35.117919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.188 13:12:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:49.188 13:12:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:04:49.188 13:12:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1920810
00:04:49.188 13:12:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:49.188 13:12:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1920810
00:04:49.759 lslocks: write error
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1920810
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1920810 ']'
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1920810
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1920810
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1920810'
killing process with pid 1920810
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1920810
00:04:49.759 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1920810
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1920810
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1920810
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1920810
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1920810 ']'
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1920810) - No such process
00:04:50.020 ERROR: process (pid: 1920810) is no longer running
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:50.020
00:04:50.020 real 0m1.589s
00:04:50.020 user 0m1.716s
00:04:50.020 sys 0m0.558s
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.020 13:12:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 ************************************
00:04:50.020 END TEST default_locks
00:04:50.020 ************************************
00:04:50.020 13:12:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:50.020 13:12:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.020 13:12:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.020 13:12:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 ************************************
00:04:50.020 START TEST default_locks_via_rpc
00:04:50.020 ************************************
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1921173
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1921173
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1921173 ']'
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.020 13:12:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 [2024-12-06 13:12:36.661142] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:04:50.020 [2024-12-06 13:12:36.661203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921173 ]
00:04:50.281 [2024-12-06 13:12:36.745242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.281 [2024-12-06 13:12:36.780758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1921173
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1921173
00:04:50.851 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:51.420 13:12:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1921173
00:04:51.420 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1921173 ']'
00:04:51.420 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1921173
00:04:51.420 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:04:51.420 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:51.420 13:12:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1921173
00:04:51.420 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:51.420 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:51.420 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1921173'
killing process with pid 1921173
00:04:51.420 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1921173
00:04:51.420 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1921173
00:04:51.680
00:04:51.680 real 0m1.614s
00:04:51.680 user 0m1.740s
00:04:51.680 sys 0m0.562s
00:04:51.680 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:51.680 13:12:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.680 ************************************
00:04:51.680 END TEST default_locks_via_rpc
00:04:51.680 ************************************
00:04:51.680 13:12:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:51.680 13:12:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:51.680 13:12:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:51.680 13:12:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:51.680 ************************************
00:04:51.680 START TEST non_locking_app_on_locked_coremask
00:04:51.680 ************************************
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1921542
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1921542 /var/tmp/spdk.sock
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1921542 ']'
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:51.680 13:12:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:51.940 [2024-12-06 13:12:38.349444] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:04:51.940 [2024-12-06 13:12:38.349506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921542 ]
00:04:51.940 [2024-12-06 13:12:38.436300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.940 [2024-12-06 13:12:38.469808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1921605
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1921605 /var/tmp/spdk2.sock
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1921605 ']'
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:52.511 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:52.771 [2024-12-06 13:12:39.179362] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:04:52.771 [2024-12-06 13:12:39.179414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921605 ]
00:04:52.771 [2024-12-06 13:12:39.263712] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:52.771 [2024-12-06 13:12:39.263737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.771 [2024-12-06 13:12:39.326475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.341 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.341 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:53.341 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1921542
00:04:53.341 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1921542
00:04:53.341 13:12:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:53.601 lslocks: write error
00:04:53.601 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1921542
00:04:53.602 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1921542 ']'
00:04:53.602 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1921542
00:04:53.602 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:53.602 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:53.602 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1921542
00:04:53.862 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:53.862 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:53.862 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1921542'
killing process with pid 1921542
00:04:53.862 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1921542
00:04:53.862 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1921542
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1921605
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1921605 ']'
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1921605
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1921605
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1921605'
killing process with pid 1921605
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1921605
00:04:54.122 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1921605
00:04:54.383
00:04:54.383 real 0m2.629s
00:04:54.383 user 0m2.947s
00:04:54.383 sys 0m0.768s
00:04:54.383 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.383 13:12:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:54.383 ************************************
00:04:54.383 END TEST non_locking_app_on_locked_coremask
00:04:54.383 ************************************
00:04:54.383 13:12:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:54.383 13:12:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:54.383 13:12:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.383 13:12:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:54.383 ************************************
00:04:54.383 START TEST locking_app_on_unlocked_coremask
00:04:54.383 ************************************
00:04:54.383 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:04:54.383 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1922069
00:04:54.383 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1922069 /var/tmp/spdk.sock
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1922069 ']'
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:54.384 13:12:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:54.646 [2024-12-06 13:12:41.054311] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:04:54.646 [2024-12-06 13:12:41.054369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922069 ]
00:04:54.646 [2024-12-06 13:12:41.140072] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:54.646 [2024-12-06 13:12:41.140117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.646 [2024-12-06 13:12:41.179086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1922256
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1922256 /var/tmp/spdk2.sock
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1922256 ']'
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.216 13:12:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:55.476 [2024-12-06 13:12:41.900687] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:04:55.477 [2024-12-06 13:12:41.900740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922256 ] 00:04:55.477 [2024-12-06 13:12:41.988468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.477 [2024-12-06 13:12:42.050984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.047 13:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.047 13:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.047 13:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1922256 00:04:56.047 13:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1922256 00:04:56.047 13:12:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.991 lslocks: write error 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1922069 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1922069 ']' 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1922069 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922069 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922069' 00:04:56.991 killing process with pid 1922069 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1922069 00:04:56.991 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1922069 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1922256 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1922256 ']' 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1922256 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922256 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922256' 00:04:57.253 killing process with pid 1922256 00:04:57.253 13:12:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1922256 00:04:57.253 13:12:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1922256 00:04:57.514 00:04:57.514 real 0m3.025s 00:04:57.514 user 0m3.372s 00:04:57.514 sys 0m0.920s 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.514 ************************************ 00:04:57.514 END TEST locking_app_on_unlocked_coremask 00:04:57.514 ************************************ 00:04:57.514 13:12:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:57.514 13:12:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.514 13:12:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.514 13:12:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.514 ************************************ 00:04:57.514 START TEST locking_app_on_locked_coremask 00:04:57.514 ************************************ 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1922643 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1922643 /var/tmp/spdk.sock 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1922643 ']' 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.514 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.514 [2024-12-06 13:12:44.153935] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:57.514 [2024-12-06 13:12:44.153989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922643 ] 00:04:57.774 [2024-12-06 13:12:44.243322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.774 [2024-12-06 13:12:44.279897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1922967 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1922967 /var/tmp/spdk2.sock 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1922967 /var/tmp/spdk2.sock 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1922967 /var/tmp/spdk2.sock 00:04:58.343 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1922967 ']' 00:04:58.344 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.344 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.344 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:58.344 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.344 13:12:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.603 [2024-12-06 13:12:45.011625] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:58.603 [2024-12-06 13:12:45.011679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922967 ] 00:04:58.603 [2024-12-06 13:12:45.099269] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1922643 has claimed it. 00:04:58.603 [2024-12-06 13:12:45.099303] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1922967) - No such process 00:04:59.172 ERROR: process (pid: 1922967) is no longer running 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1922643 00:04:59.172 13:12:45 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1922643 00:04:59.172 13:12:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.431 lslocks: write error 00:04:59.431 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1922643 00:04:59.431 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1922643 ']' 00:04:59.431 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1922643 00:04:59.431 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:59.431 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.431 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1922643 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1922643' 00:04:59.691 killing process with pid 1922643 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1922643 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1922643 00:04:59.691 00:04:59.691 real 0m2.184s 00:04:59.691 user 0m2.456s 00:04:59.691 sys 0m0.638s 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.691 13:12:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.691 ************************************ 00:04:59.691 END TEST locking_app_on_locked_coremask 00:04:59.691 ************************************ 00:04:59.691 13:12:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:59.691 13:12:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.691 13:12:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.691 13:12:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.951 ************************************ 00:04:59.951 START TEST locking_overlapped_coremask 00:04:59.951 ************************************ 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1923269 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1923269 /var/tmp/spdk.sock 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1923269 ']' 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.951 13:12:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.951 [2024-12-06 13:12:46.414707] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:04:59.951 [2024-12-06 13:12:46.414766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923269 ] 00:04:59.951 [2024-12-06 13:12:46.503902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.951 [2024-12-06 13:12:46.545165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.951 [2024-12-06 13:12:46.545317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.951 [2024-12-06 13:12:46.545318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1923352 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1923352 /var/tmp/spdk2.sock 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1923352 /var/tmp/spdk2.sock 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1923352 /var/tmp/spdk2.sock 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1923352 ']' 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.892 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.892 [2024-12-06 13:12:47.268131] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:00.892 [2024-12-06 13:12:47.268185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923352 ] 00:05:00.892 [2024-12-06 13:12:47.380631] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1923269 has claimed it. 00:05:00.892 [2024-12-06 13:12:47.380672] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:01.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1923352) - No such process 00:05:01.465 ERROR: process (pid: 1923352) is no longer running 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1923269 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1923269 ']' 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1923269 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923269 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923269' 00:05:01.465 killing process with pid 1923269 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1923269 00:05:01.465 13:12:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1923269 00:05:01.727 00:05:01.727 real 0m1.792s 00:05:01.727 user 0m5.171s 00:05:01.727 sys 0m0.388s 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 
************************************ 00:05:01.727 END TEST locking_overlapped_coremask 00:05:01.727 ************************************ 00:05:01.727 13:12:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:01.727 13:12:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.727 13:12:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.727 13:12:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 ************************************ 00:05:01.727 START TEST locking_overlapped_coremask_via_rpc 00:05:01.727 ************************************ 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1923711 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1923711 /var/tmp/spdk.sock 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1923711 ']' 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:01.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.727 13:12:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.727 [2024-12-06 13:12:48.287523] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:01.727 [2024-12-06 13:12:48.287584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923711 ] 00:05:01.727 [2024-12-06 13:12:48.374136] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:01.727 [2024-12-06 13:12:48.374164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.989 [2024-12-06 13:12:48.410989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.989 [2024-12-06 13:12:48.411142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.989 [2024-12-06 13:12:48.411143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.559 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.559 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.559 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1923722 00:05:02.559 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1923722 /var/tmp/spdk2.sock 00:05:02.559 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:02.560 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1923722 ']' 00:05:02.560 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.560 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.560 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.560 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.560 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.560 [2024-12-06 13:12:49.129027] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:02.560 [2024-12-06 13:12:49.129081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923722 ] 00:05:02.819 [2024-12-06 13:12:49.240440] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:02.819 [2024-12-06 13:12:49.240474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.819 [2024-12-06 13:12:49.318487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.819 [2024-12-06 13:12:49.321578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.819 [2024-12-06 13:12:49.321578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.392 13:12:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.392 [2024-12-06 13:12:49.929532] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1923711 has claimed it. 00:05:03.392 request: 00:05:03.392 { 00:05:03.392 "method": "framework_enable_cpumask_locks", 00:05:03.392 "req_id": 1 00:05:03.392 } 00:05:03.392 Got JSON-RPC error response 00:05:03.392 response: 00:05:03.392 { 00:05:03.392 "code": -32603, 00:05:03.392 "message": "Failed to claim CPU core: 2" 00:05:03.392 } 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1923711 /var/tmp/spdk.sock 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1923711 ']' 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.392 13:12:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1923722 /var/tmp/spdk2.sock 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1923722 ']' 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:03.653 00:05:03.653 real 0m2.075s 00:05:03.653 user 0m0.856s 00:05:03.653 sys 0m0.150s 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.653 13:12:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.653 ************************************ 00:05:03.653 END TEST locking_overlapped_coremask_via_rpc 00:05:03.653 ************************************ 00:05:03.913 13:12:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:03.913 13:12:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1923711 ]] 00:05:03.913 13:12:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1923711 00:05:03.913 13:12:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1923711 ']' 00:05:03.913 13:12:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1923711 00:05:03.913 13:12:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.913 13:12:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.913 13:12:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923711 00:05:03.914 13:12:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.914 13:12:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.914 13:12:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923711' 00:05:03.914 killing process with pid 1923711 00:05:03.914 13:12:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1923711 00:05:03.914 13:12:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1923711 00:05:04.174 13:12:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1923722 ]] 00:05:04.174 13:12:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1923722 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1923722 ']' 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1923722 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923722 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1923722' 00:05:04.174 killing process with pid 1923722 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1923722 00:05:04.174 13:12:50 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1923722 00:05:04.435 13:12:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.435 13:12:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:04.435 13:12:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1923711 ]] 00:05:04.435 13:12:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1923711 00:05:04.435 13:12:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1923711 ']' 00:05:04.436 13:12:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1923711 00:05:04.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1923711) - No such process 00:05:04.436 13:12:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1923711 is not found' 00:05:04.436 Process with pid 1923711 is not found 00:05:04.436 13:12:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1923722 ]] 00:05:04.436 13:12:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1923722 00:05:04.436 13:12:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1923722 ']' 00:05:04.436 13:12:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1923722 00:05:04.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1923722) - No such process 00:05:04.436 13:12:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1923722 is not found' 00:05:04.436 Process with pid 1923722 is not found 00:05:04.436 13:12:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:04.436 00:05:04.436 real 0m16.165s 00:05:04.436 user 0m28.232s 00:05:04.436 sys 0m4.926s 00:05:04.436 13:12:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.436 
13:12:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.436 ************************************ 00:05:04.436 END TEST cpu_locks 00:05:04.436 ************************************ 00:05:04.436 00:05:04.436 real 0m42.042s 00:05:04.436 user 1m23.307s 00:05:04.436 sys 0m8.307s 00:05:04.436 13:12:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.436 13:12:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.436 ************************************ 00:05:04.436 END TEST event 00:05:04.436 ************************************ 00:05:04.436 13:12:50 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:04.436 13:12:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.436 13:12:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.436 13:12:50 -- common/autotest_common.sh@10 -- # set +x 00:05:04.436 ************************************ 00:05:04.436 START TEST thread 00:05:04.436 ************************************ 00:05:04.436 13:12:50 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:04.436 * Looking for test storage... 
00:05:04.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:04.436 13:12:51 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.436 13:12:51 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.436 13:12:51 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.696 13:12:51 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.696 13:12:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.696 13:12:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.696 13:12:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.696 13:12:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.696 13:12:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.696 13:12:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.696 13:12:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.696 13:12:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.696 13:12:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.696 13:12:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.696 13:12:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.696 13:12:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:04.697 13:12:51 thread -- scripts/common.sh@345 -- # : 1 00:05:04.697 13:12:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.697 13:12:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.697 13:12:51 thread -- scripts/common.sh@365 -- # decimal 1 00:05:04.697 13:12:51 thread -- scripts/common.sh@353 -- # local d=1 00:05:04.697 13:12:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.697 13:12:51 thread -- scripts/common.sh@355 -- # echo 1 00:05:04.697 13:12:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.697 13:12:51 thread -- scripts/common.sh@366 -- # decimal 2 00:05:04.697 13:12:51 thread -- scripts/common.sh@353 -- # local d=2 00:05:04.697 13:12:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.697 13:12:51 thread -- scripts/common.sh@355 -- # echo 2 00:05:04.697 13:12:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.697 13:12:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.697 13:12:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.697 13:12:51 thread -- scripts/common.sh@368 -- # return 0 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.697 --rc genhtml_branch_coverage=1 00:05:04.697 --rc genhtml_function_coverage=1 00:05:04.697 --rc genhtml_legend=1 00:05:04.697 --rc geninfo_all_blocks=1 00:05:04.697 --rc geninfo_unexecuted_blocks=1 00:05:04.697 00:05:04.697 ' 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.697 --rc genhtml_branch_coverage=1 00:05:04.697 --rc genhtml_function_coverage=1 00:05:04.697 --rc genhtml_legend=1 00:05:04.697 --rc geninfo_all_blocks=1 00:05:04.697 --rc geninfo_unexecuted_blocks=1 00:05:04.697 00:05:04.697 ' 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.697 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.697 --rc genhtml_branch_coverage=1 00:05:04.697 --rc genhtml_function_coverage=1 00:05:04.697 --rc genhtml_legend=1 00:05:04.697 --rc geninfo_all_blocks=1 00:05:04.697 --rc geninfo_unexecuted_blocks=1 00:05:04.697 00:05:04.697 ' 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.697 --rc genhtml_branch_coverage=1 00:05:04.697 --rc genhtml_function_coverage=1 00:05:04.697 --rc genhtml_legend=1 00:05:04.697 --rc geninfo_all_blocks=1 00:05:04.697 --rc geninfo_unexecuted_blocks=1 00:05:04.697 00:05:04.697 ' 00:05:04.697 13:12:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.697 13:12:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.697 ************************************ 00:05:04.697 START TEST thread_poller_perf 00:05:04.697 ************************************ 00:05:04.697 13:12:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:04.697 [2024-12-06 13:12:51.236808] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:04.697 [2024-12-06 13:12:51.236916] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924315 ] 00:05:04.697 [2024-12-06 13:12:51.328747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.957 [2024-12-06 13:12:51.369221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.957 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:06.052 [2024-12-06T12:12:52.711Z] ====================================== 00:05:06.052 [2024-12-06T12:12:52.711Z] busy:2407175680 (cyc) 00:05:06.052 [2024-12-06T12:12:52.711Z] total_run_count: 419000 00:05:06.052 [2024-12-06T12:12:52.711Z] tsc_hz: 2400000000 (cyc) 00:05:06.052 [2024-12-06T12:12:52.711Z] ====================================== 00:05:06.052 [2024-12-06T12:12:52.711Z] poller_cost: 5745 (cyc), 2393 (nsec) 00:05:06.052 00:05:06.052 real 0m1.188s 00:05:06.052 user 0m1.091s 00:05:06.052 sys 0m0.093s 00:05:06.052 13:12:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.052 13:12:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 ************************************ 00:05:06.052 END TEST thread_poller_perf 00:05:06.052 ************************************ 00:05:06.052 13:12:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:06.052 13:12:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:06.052 13:12:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.052 13:12:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.052 ************************************ 00:05:06.052 START TEST thread_poller_perf 00:05:06.052 
************************************ 00:05:06.052 13:12:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:06.052 [2024-12-06 13:12:52.500350] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:06.052 [2024-12-06 13:12:52.500446] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924531 ] 00:05:06.052 [2024-12-06 13:12:52.592078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.052 [2024-12-06 13:12:52.622439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.052 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:06.995 [2024-12-06T12:12:53.654Z] ====================================== 00:05:06.995 [2024-12-06T12:12:53.654Z] busy:2401569062 (cyc) 00:05:06.995 [2024-12-06T12:12:53.654Z] total_run_count: 5104000 00:05:06.995 [2024-12-06T12:12:53.654Z] tsc_hz: 2400000000 (cyc) 00:05:06.995 [2024-12-06T12:12:53.654Z] ====================================== 00:05:06.995 [2024-12-06T12:12:53.654Z] poller_cost: 470 (cyc), 195 (nsec) 00:05:06.995 00:05:06.995 real 0m1.171s 00:05:06.995 user 0m1.088s 00:05:06.995 sys 0m0.079s 00:05:06.995 13:12:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.995 13:12:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.995 ************************************ 00:05:06.995 END TEST thread_poller_perf 00:05:06.995 ************************************ 00:05:07.256 13:12:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:07.256 00:05:07.256 real 0m2.715s 00:05:07.256 user 0m2.349s 00:05:07.256 sys 0m0.381s 00:05:07.256 13:12:53 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.256 13:12:53 thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.256 ************************************ 00:05:07.256 END TEST thread 00:05:07.256 ************************************ 00:05:07.256 13:12:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:07.256 13:12:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:07.256 13:12:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.256 13:12:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.256 13:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:07.256 ************************************ 00:05:07.256 START TEST app_cmdline 00:05:07.256 ************************************ 00:05:07.256 13:12:53 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:07.256 * Looking for test storage... 00:05:07.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:07.256 13:12:53 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.256 13:12:53 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.256 13:12:53 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.517 13:12:53 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.518 13:12:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.518 --rc genhtml_branch_coverage=1 
00:05:07.518 --rc genhtml_function_coverage=1 00:05:07.518 --rc genhtml_legend=1 00:05:07.518 --rc geninfo_all_blocks=1 00:05:07.518 --rc geninfo_unexecuted_blocks=1 00:05:07.518 00:05:07.518 ' 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.518 --rc genhtml_branch_coverage=1 00:05:07.518 --rc genhtml_function_coverage=1 00:05:07.518 --rc genhtml_legend=1 00:05:07.518 --rc geninfo_all_blocks=1 00:05:07.518 --rc geninfo_unexecuted_blocks=1 00:05:07.518 00:05:07.518 ' 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.518 --rc genhtml_branch_coverage=1 00:05:07.518 --rc genhtml_function_coverage=1 00:05:07.518 --rc genhtml_legend=1 00:05:07.518 --rc geninfo_all_blocks=1 00:05:07.518 --rc geninfo_unexecuted_blocks=1 00:05:07.518 00:05:07.518 ' 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.518 --rc genhtml_branch_coverage=1 00:05:07.518 --rc genhtml_function_coverage=1 00:05:07.518 --rc genhtml_legend=1 00:05:07.518 --rc geninfo_all_blocks=1 00:05:07.518 --rc geninfo_unexecuted_blocks=1 00:05:07.518 00:05:07.518 ' 00:05:07.518 13:12:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:07.518 13:12:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1924928 00:05:07.518 13:12:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1924928 00:05:07.518 13:12:53 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1924928 ']' 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.518 13:12:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 [2024-12-06 13:12:54.013983] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:07.518 [2024-12-06 13:12:54.014042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924928 ] 00:05:07.518 [2024-12-06 13:12:54.092535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.518 [2024-12-06 13:12:54.123419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.459 13:12:54 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.459 13:12:54 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:08.459 { 00:05:08.459 "version": "SPDK v25.01-pre git sha1 b82e5bf03", 00:05:08.459 "fields": { 00:05:08.459 "major": 25, 00:05:08.459 "minor": 1, 00:05:08.459 "patch": 0, 00:05:08.459 "suffix": "-pre", 00:05:08.459 "commit": "b82e5bf03" 00:05:08.459 } 00:05:08.459 } 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:08.459 13:12:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.459 13:12:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:08.459 13:12:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.459 13:12:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.459 13:12:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:08.459 13:12:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:08.459 13:12:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:08.459 13:12:55 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.719 request: 00:05:08.719 { 00:05:08.719 "method": "env_dpdk_get_mem_stats", 00:05:08.719 "req_id": 1 00:05:08.719 } 00:05:08.719 Got JSON-RPC error response 00:05:08.719 response: 00:05:08.719 { 00:05:08.719 "code": -32601, 00:05:08.719 "message": "Method not found" 00:05:08.719 } 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.719 13:12:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1924928 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1924928 ']' 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1924928 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1924928 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1924928' 00:05:08.719 killing process with pid 1924928 00:05:08.719 
13:12:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 1924928 00:05:08.719 13:12:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 1924928 00:05:08.979 00:05:08.979 real 0m1.660s 00:05:08.979 user 0m1.998s 00:05:08.979 sys 0m0.426s 00:05:08.979 13:12:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.979 13:12:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.979 ************************************ 00:05:08.979 END TEST app_cmdline 00:05:08.979 ************************************ 00:05:08.979 13:12:55 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.979 13:12:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.979 13:12:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.979 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.979 ************************************ 00:05:08.979 START TEST version 00:05:08.979 ************************************ 00:05:08.979 13:12:55 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:08.979 * Looking for test storage... 
00:05:08.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:08.979 13:12:55 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.979 13:12:55 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.979 13:12:55 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.241 13:12:55 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.241 13:12:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.241 13:12:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.241 13:12:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.241 13:12:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.241 13:12:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.241 13:12:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.241 13:12:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.241 13:12:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.241 13:12:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.241 13:12:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.241 13:12:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.241 13:12:55 version -- scripts/common.sh@344 -- # case "$op" in 00:05:09.241 13:12:55 version -- scripts/common.sh@345 -- # : 1 00:05:09.241 13:12:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.241 13:12:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.241 13:12:55 version -- scripts/common.sh@365 -- # decimal 1 00:05:09.241 13:12:55 version -- scripts/common.sh@353 -- # local d=1 00:05:09.241 13:12:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.241 13:12:55 version -- scripts/common.sh@355 -- # echo 1 00:05:09.241 13:12:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.241 13:12:55 version -- scripts/common.sh@366 -- # decimal 2 00:05:09.241 13:12:55 version -- scripts/common.sh@353 -- # local d=2 00:05:09.241 13:12:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.241 13:12:55 version -- scripts/common.sh@355 -- # echo 2 00:05:09.241 13:12:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.241 13:12:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.241 13:12:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.241 13:12:55 version -- scripts/common.sh@368 -- # return 0 00:05:09.241 13:12:55 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.241 13:12:55 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.241 --rc genhtml_branch_coverage=1 00:05:09.241 --rc genhtml_function_coverage=1 00:05:09.241 --rc genhtml_legend=1 00:05:09.241 --rc geninfo_all_blocks=1 00:05:09.241 --rc geninfo_unexecuted_blocks=1 00:05:09.241 00:05:09.241 ' 00:05:09.241 13:12:55 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.241 --rc genhtml_branch_coverage=1 00:05:09.241 --rc genhtml_function_coverage=1 00:05:09.241 --rc genhtml_legend=1 00:05:09.241 --rc geninfo_all_blocks=1 00:05:09.241 --rc geninfo_unexecuted_blocks=1 00:05:09.241 00:05:09.241 ' 00:05:09.241 13:12:55 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.241 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.241 --rc genhtml_branch_coverage=1 00:05:09.241 --rc genhtml_function_coverage=1 00:05:09.241 --rc genhtml_legend=1 00:05:09.241 --rc geninfo_all_blocks=1 00:05:09.241 --rc geninfo_unexecuted_blocks=1 00:05:09.241 00:05:09.241 ' 00:05:09.241 13:12:55 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.241 --rc genhtml_branch_coverage=1 00:05:09.241 --rc genhtml_function_coverage=1 00:05:09.241 --rc genhtml_legend=1 00:05:09.241 --rc geninfo_all_blocks=1 00:05:09.241 --rc geninfo_unexecuted_blocks=1 00:05:09.241 00:05:09.241 ' 00:05:09.241 13:12:55 version -- app/version.sh@17 -- # get_header_version major 00:05:09.241 13:12:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.241 13:12:55 version -- app/version.sh@14 -- # cut -f2 00:05:09.241 13:12:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.241 13:12:55 version -- app/version.sh@17 -- # major=25 00:05:09.241 13:12:55 version -- app/version.sh@18 -- # get_header_version minor 00:05:09.241 13:12:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.242 13:12:55 version -- app/version.sh@14 -- # cut -f2 00:05:09.242 13:12:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.242 13:12:55 version -- app/version.sh@18 -- # minor=1 00:05:09.242 13:12:55 version -- app/version.sh@19 -- # get_header_version patch 00:05:09.242 13:12:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.242 13:12:55 version -- app/version.sh@14 -- # cut -f2 00:05:09.242 13:12:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.242 
13:12:55 version -- app/version.sh@19 -- # patch=0 00:05:09.242 13:12:55 version -- app/version.sh@20 -- # get_header_version suffix 00:05:09.242 13:12:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:09.242 13:12:55 version -- app/version.sh@14 -- # cut -f2 00:05:09.242 13:12:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:09.242 13:12:55 version -- app/version.sh@20 -- # suffix=-pre 00:05:09.242 13:12:55 version -- app/version.sh@22 -- # version=25.1 00:05:09.242 13:12:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:09.242 13:12:55 version -- app/version.sh@28 -- # version=25.1rc0 00:05:09.242 13:12:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:09.242 13:12:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:09.242 13:12:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:09.242 13:12:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:09.242 00:05:09.242 real 0m0.286s 00:05:09.242 user 0m0.168s 00:05:09.242 sys 0m0.165s 00:05:09.242 13:12:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.242 13:12:55 version -- common/autotest_common.sh@10 -- # set +x 00:05:09.242 ************************************ 00:05:09.242 END TEST version 00:05:09.242 ************************************ 00:05:09.242 13:12:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:09.242 13:12:55 -- spdk/autotest.sh@194 -- # uname -s 00:05:09.242 13:12:55 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:09.242 13:12:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:09.242 13:12:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:09.242 13:12:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:09.242 13:12:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.242 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.242 13:12:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:09.242 13:12:55 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:09.242 13:12:55 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:09.242 13:12:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.242 13:12:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.242 13:12:55 -- common/autotest_common.sh@10 -- # set +x 00:05:09.504 ************************************ 00:05:09.504 START TEST nvmf_tcp 00:05:09.504 ************************************ 00:05:09.504 13:12:55 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:09.504 * Looking for test storage... 
00:05:09.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.504 13:12:56 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.504 --rc genhtml_branch_coverage=1 00:05:09.504 --rc genhtml_function_coverage=1 00:05:09.504 --rc genhtml_legend=1 00:05:09.504 --rc geninfo_all_blocks=1 00:05:09.504 --rc geninfo_unexecuted_blocks=1 00:05:09.504 00:05:09.504 ' 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.504 --rc genhtml_branch_coverage=1 00:05:09.504 --rc genhtml_function_coverage=1 00:05:09.504 --rc genhtml_legend=1 00:05:09.504 --rc geninfo_all_blocks=1 00:05:09.504 --rc geninfo_unexecuted_blocks=1 00:05:09.504 00:05:09.504 ' 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.504 --rc genhtml_branch_coverage=1 00:05:09.504 --rc genhtml_function_coverage=1 00:05:09.504 --rc genhtml_legend=1 00:05:09.504 --rc geninfo_all_blocks=1 00:05:09.504 --rc geninfo_unexecuted_blocks=1 00:05:09.504 00:05:09.504 ' 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.504 --rc genhtml_branch_coverage=1 00:05:09.504 --rc genhtml_function_coverage=1 00:05:09.504 --rc genhtml_legend=1 00:05:09.504 --rc geninfo_all_blocks=1 00:05:09.504 --rc geninfo_unexecuted_blocks=1 00:05:09.504 00:05:09.504 ' 00:05:09.504 13:12:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:09.504 13:12:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:09.504 13:12:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.504 13:12:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.504 ************************************ 00:05:09.504 START TEST nvmf_target_core 00:05:09.504 ************************************ 00:05:09.504 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:09.766 * Looking for test storage... 
00:05:09.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.766 --rc genhtml_branch_coverage=1 00:05:09.766 --rc genhtml_function_coverage=1 00:05:09.766 --rc genhtml_legend=1 00:05:09.766 --rc geninfo_all_blocks=1 00:05:09.766 --rc geninfo_unexecuted_blocks=1 00:05:09.766 00:05:09.766 ' 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.766 --rc genhtml_branch_coverage=1 
00:05:09.766 --rc genhtml_function_coverage=1 00:05:09.766 --rc genhtml_legend=1 00:05:09.766 --rc geninfo_all_blocks=1 00:05:09.766 --rc geninfo_unexecuted_blocks=1 00:05:09.766 00:05:09.766 ' 00:05:09.766 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.766 --rc genhtml_branch_coverage=1 00:05:09.766 --rc genhtml_function_coverage=1 00:05:09.766 --rc genhtml_legend=1 00:05:09.766 --rc geninfo_all_blocks=1 00:05:09.767 --rc geninfo_unexecuted_blocks=1 00:05:09.767 00:05:09.767 ' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.767 --rc genhtml_branch_coverage=1 00:05:09.767 --rc genhtml_function_coverage=1 00:05:09.767 --rc genhtml_legend=1 00:05:09.767 --rc geninfo_all_blocks=1 00:05:09.767 --rc geninfo_unexecuted_blocks=1 00:05:09.767 00:05:09.767 ' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.767 13:12:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:10.028 ************************************ 00:05:10.028 START TEST nvmf_abort 00:05:10.028 ************************************ 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.028 * Looking for test storage... 
00:05:10.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.028 
13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.028 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.028 --rc genhtml_branch_coverage=1 00:05:10.028 --rc genhtml_function_coverage=1 00:05:10.029 --rc genhtml_legend=1 00:05:10.029 --rc geninfo_all_blocks=1 00:05:10.029 --rc 
geninfo_unexecuted_blocks=1 00:05:10.029 00:05:10.029 ' 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.029 --rc genhtml_branch_coverage=1 00:05:10.029 --rc genhtml_function_coverage=1 00:05:10.029 --rc genhtml_legend=1 00:05:10.029 --rc geninfo_all_blocks=1 00:05:10.029 --rc geninfo_unexecuted_blocks=1 00:05:10.029 00:05:10.029 ' 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.029 --rc genhtml_branch_coverage=1 00:05:10.029 --rc genhtml_function_coverage=1 00:05:10.029 --rc genhtml_legend=1 00:05:10.029 --rc geninfo_all_blocks=1 00:05:10.029 --rc geninfo_unexecuted_blocks=1 00:05:10.029 00:05:10.029 ' 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.029 --rc genhtml_branch_coverage=1 00:05:10.029 --rc genhtml_function_coverage=1 00:05:10.029 --rc genhtml_legend=1 00:05:10.029 --rc geninfo_all_blocks=1 00:05:10.029 --rc geninfo_unexecuted_blocks=1 00:05:10.029 00:05:10.029 ' 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.029 13:12:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.029 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:10.290 13:12:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:18.430 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:18.431 13:13:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:18.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:18.431 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:18.431 13:13:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:18.431 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:05:18.431 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:18.431 13:13:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:18.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:18.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:05:18.431 00:05:18.431 --- 10.0.0.2 ping statistics --- 00:05:18.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:18.431 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:18.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:18.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:05:18.431 00:05:18.431 --- 10.0.0.1 ping statistics --- 00:05:18.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:18.431 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:05:18.431 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1929423 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1929423 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1929423 ']' 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.432 13:13:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.432 [2024-12-06 13:13:04.271289] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:05:18.432 [2024-12-06 13:13:04.271353] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:18.432 [2024-12-06 13:13:04.373693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.432 [2024-12-06 13:13:04.428272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:18.432 [2024-12-06 13:13:04.428328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:18.432 [2024-12-06 13:13:04.428336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:18.432 [2024-12-06 13:13:04.428344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:18.432 [2024-12-06 13:13:04.428350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:18.432 [2024-12-06 13:13:04.430209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.432 [2024-12-06 13:13:04.430370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.432 [2024-12-06 13:13:04.430371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.692 [2024-12-06 13:13:05.154932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.692 Malloc0 00:05:18.692 13:13:05 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.692 Delay0 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.692 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.693 [2024-12-06 13:13:05.246982] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.693 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:18.953 [2024-12-06 13:13:05.387641] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:20.863 Initializing NVMe Controllers 00:05:20.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:20.863 controller IO queue size 128 less than required 00:05:20.863 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:20.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:20.863 Initialization complete. Launching workers. 
00:05:20.863 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28536 00:05:20.863 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28597, failed to submit 62 00:05:20.863 success 28540, unsuccessful 57, failed 0 00:05:20.863 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:20.863 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.863 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:21.123 rmmod nvme_tcp 00:05:21.123 rmmod nvme_fabrics 00:05:21.123 rmmod nvme_keyring 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:21.123 13:13:07 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1929423 ']' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1929423 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1929423 ']' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1929423 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1929423 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1929423' 00:05:21.123 killing process with pid 1929423 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1929423 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1929423 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.123 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:23.669 00:05:23.669 real 0m13.402s 00:05:23.669 user 0m14.058s 00:05:23.669 sys 0m6.616s 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:23.669 ************************************ 00:05:23.669 END TEST nvmf_abort 00:05:23.669 ************************************ 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.669 13:13:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:23.669 ************************************ 00:05:23.669 START TEST nvmf_ns_hotplug_stress 00:05:23.669 ************************************ 00:05:23.669 13:13:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:23.669 * Looking for test storage... 00:05:23.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.669 
13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.669 13:13:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.669 --rc genhtml_branch_coverage=1 00:05:23.669 --rc genhtml_function_coverage=1 00:05:23.669 --rc genhtml_legend=1 00:05:23.669 --rc geninfo_all_blocks=1 00:05:23.669 --rc geninfo_unexecuted_blocks=1 00:05:23.669 00:05:23.669 ' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.669 --rc genhtml_branch_coverage=1 00:05:23.669 --rc genhtml_function_coverage=1 00:05:23.669 --rc genhtml_legend=1 00:05:23.669 --rc geninfo_all_blocks=1 00:05:23.669 --rc geninfo_unexecuted_blocks=1 00:05:23.669 00:05:23.669 ' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.669 --rc genhtml_branch_coverage=1 00:05:23.669 --rc genhtml_function_coverage=1 00:05:23.669 --rc genhtml_legend=1 00:05:23.669 --rc geninfo_all_blocks=1 00:05:23.669 --rc geninfo_unexecuted_blocks=1 00:05:23.669 00:05:23.669 ' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.669 --rc genhtml_branch_coverage=1 00:05:23.669 --rc genhtml_function_coverage=1 00:05:23.669 --rc genhtml_legend=1 00:05:23.669 --rc geninfo_all_blocks=1 00:05:23.669 --rc geninfo_unexecuted_blocks=1 00:05:23.669 
00:05:23.669 ' 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.669 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:23.670 13:13:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:31.808 13:13:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:31.808 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:31.809 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:31.809 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:31.809 13:13:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:31.809 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:31.809 13:13:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:31.809 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:31.809 13:13:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:31.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:31.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:05:31.809 00:05:31.809 --- 10.0.0.2 ping statistics --- 00:05:31.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:31.809 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:31.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:31.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:05:31.809 00:05:31.809 --- 10.0.0.1 ping statistics --- 00:05:31.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:31.809 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1934431 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1934431 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1934431 ']' 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.809 13:13:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:31.809 [2024-12-06 13:13:17.709988] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:05:31.809 [2024-12-06 13:13:17.710054] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:31.809 [2024-12-06 13:13:17.812630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.809 [2024-12-06 13:13:17.865298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:31.809 [2024-12-06 13:13:17.865351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:31.809 [2024-12-06 13:13:17.865360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.810 [2024-12-06 13:13:17.865368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.810 [2024-12-06 13:13:17.865374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:31.810 [2024-12-06 13:13:17.867507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.810 [2024-12-06 13:13:17.867671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.810 [2024-12-06 13:13:17.867671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:32.070 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:32.331 [2024-12-06 13:13:18.745067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.331 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:32.331 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:32.590 [2024-12-06 13:13:19.148173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:32.591 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:32.850 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:33.111 Malloc0 00:05:33.111 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:33.372 Delay0 00:05:33.372 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.372 13:13:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:33.632 NULL1 00:05:33.632 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:33.893 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:33.893 13:13:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1934845 00:05:33.893 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:33.893 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.837 Read completed with error (sct=0, sc=11) 00:05:34.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.837 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.098 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.098 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:35.098 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:35.359 true 00:05:35.359 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:35.359 13:13:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.301 13:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.301 13:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:36.301 13:13:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:36.562 true 00:05:36.562 13:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:36.562 13:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.502 13:13:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.502 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:37.502 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:37.764 true 00:05:37.764 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:37.764 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.025 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.025 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:38.025 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:38.285 true 00:05:38.285 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:38.285 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.546 13:13:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.546 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:38.546 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:38.806 true 00:05:38.806 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:38.806 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.066 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.066 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:39.066 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:39.326 true 00:05:39.326 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:39.326 13:13:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.586 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.586 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:39.586 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:39.845 true 00:05:39.845 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:39.845 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.104 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.104 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:40.104 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:40.365 true 00:05:40.365 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:40.365 13:13:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.626 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.887 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:40.887 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:40.887 true 00:05:40.887 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:40.887 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.147 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.407 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:41.407 13:13:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:41.407 true 00:05:41.407 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:41.407 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.668 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.959 [2024-12-06 13:13:28.379834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 
13:13:28.385041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.385998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 
[2024-12-06 13:13:28.386025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386718] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.386999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387613] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.387997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.961 [2024-12-06 13:13:28.388254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 
13:13:28.388495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.388990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 
[2024-12-06 13:13:28.389462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.389595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390261] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.390994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391111] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.962 [2024-12-06 13:13:28.391530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.963 [2024-12-06 13:13:28.391559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 
13:13:28.402838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.402991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:41.966 [2024-12-06 13:13:28.403604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403836] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.403990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.966 [2024-12-06 13:13:28.404705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404845] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.404982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 
13:13:28.405721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.405977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 
[2024-12-06 13:13:28.406648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.406985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407068] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.407999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.967 [2024-12-06 13:13:28.408230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408490] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.968 [2024-12-06 13:13:28.408951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated, timestamps 13:13:28.408974 through 13:13:28.410624; repeats omitted ...]
00:05:41.968 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
[... identical *ERROR* line repeated, timestamps 13:13:28.410658 through 13:13:28.410995; repeats omitted ...]
00:05:41.969 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
[... identical *ERROR* line repeated, timestamps 13:13:28.411306 through 13:13:28.419582; repeats omitted ...]
[2024-12-06 13:13:28.419611] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.419975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420603] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.420981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.421011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.421040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.421076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.971 [2024-12-06 13:13:28.421103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 
13:13:28.421904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.421994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 
[2024-12-06 13:13:28.422787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.422975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423233] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.423969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424215] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.972 [2024-12-06 13:13:28.424623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.424670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.424699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 
13:13:28.425590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.425973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 
[2024-12-06 13:13:28.426468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426919] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.973 [2024-12-06 13:13:28.426949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-12-06 13:13:28.437594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.437979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438092] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.976 [2024-12-06 13:13:28.438587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:41.977 [2024-12-06 13:13:28.438616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.438645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.438676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.438708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.438739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 
13:13:28.439741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.439998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 
[2024-12-06 13:13:28.440613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.440979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.977 [2024-12-06 13:13:28.441007] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.441987] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.442993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 
13:13:28.443251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.443979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 
[2024-12-06 13:13:28.444102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.978 [2024-12-06 13:13:28.444467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.979 [2024-12-06 13:13:28.444496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.979 [2024-12-06 13:13:28.444525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.979 [2024-12-06 13:13:28.444552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.979 [2024-12-06 13:13:28.444576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.979 [2024-12-06 13:13:28.444608] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 
[2024-12-06 13:13:28.454763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.454986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455150] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.455898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456139] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.982 [2024-12-06 13:13:28.456392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.456991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 
13:13:28.457383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.457998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 
[2024-12-06 13:13:28.458236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458640] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.458985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459665] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.983 [2024-12-06 13:13:28.459809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.459839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.459867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.459895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.459929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.459959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.459991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 
13:13:28.460681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.460949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.461307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.461336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.461364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.461392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.461421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.984 [2024-12-06 13:13:28.461448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.471894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.471925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.471956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.471985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 
[2024-12-06 13:13:28.472663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.987 [2024-12-06 13:13:28.472973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473139] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.473998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474028] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:41.988 [2024-12-06 13:13:28.474858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.474978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 
[2024-12-06 13:13:28.475703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.475960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476128] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.988 [2024-12-06 13:13:28.476341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.476767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477543] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.477973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 
13:13:28.478456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.989 [2024-12-06 13:13:28.478915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.489907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.489937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.489967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.489999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.992 [2024-12-06 13:13:28.490204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 
[2024-12-06 13:13:28.490356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490802] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.490985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.491983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492044] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.492948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 
13:13:28.492976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.993 [2024-12-06 13:13:28.493589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.493621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 
[2024-12-06 13:13:28.494410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494888] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.494977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495788] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.495976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 13:13:28.496824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.994 [2024-12-06 
13:13:28.496852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06
13:13:28.507902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.507930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.507959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.507994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.508993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 
[2024-12-06 13:13:28.509233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509700] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.509981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.998 [2024-12-06 13:13:28.510558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 Message suppressed 999 times: Read 
completed with error (sct=0, sc=15) 00:05:41.999 [2024-12-06 13:13:28.510730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.510973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 
13:13:28.511157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.511978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 
[2024-12-06 13:13:28.512096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512626] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.512974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:41.999 [2024-12-06 13:13:28.513947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.513977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514131] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 13:13:28.514992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.000 [2024-12-06 
13:13:28.515020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06
13:13:28.525700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.525971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 
[2024-12-06 13:13:28.526756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.003 [2024-12-06 13:13:28.526886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.526913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.526943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.526973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527175] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.527993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528080] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.528891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 
13:13:28.529510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.529980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.530015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.004 [2024-12-06 13:13:28.530040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 
[2024-12-06 13:13:28.530407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530837] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.530990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531854] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.531997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.005 [2024-12-06 13:13:28.532291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 
13:13:28.543613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.543990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 
[2024-12-06 13:13:28.544488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.544746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545254] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.545988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546174] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.546975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 
13:13:28.547039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:42.008 [2024-12-06 13:13:28.547623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.547973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:05:42.008 [2024-12-06 13:13:28.548011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.548039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.548068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.548095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.008 [2024-12-06 13:13:28.548128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548434] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.548985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549370] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.009 [2024-12-06 13:13:28.549924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.560858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.560888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.560918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.560949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.560985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 
13:13:28.561320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.561613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 
[2024-12-06 13:13:28.562883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.011 [2024-12-06 13:13:28.562994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563338] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.563979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564248] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.564993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 
13:13:28.565313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.565970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 
[2024-12-06 13:13:28.566226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.566991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567210] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.012 [2024-12-06 13:13:28.567607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.567983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.568014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.568044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.568083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 [2024-12-06 13:13:28.568122] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.013 true
13:13:28.579076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.579957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 
[2024-12-06 13:13:28.580050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580484] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.015 [2024-12-06 13:13:28.580949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581736] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.581993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.582024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.015 [2024-12-06 13:13:28.582056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 
13:13:28.582643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.582980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:42.016 [2024-12-06 13:13:28.583664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583822] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.583979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584747] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.584999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.585605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 13:13:28.586190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016 [2024-12-06 
13:13:28.586222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.016
13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:42.307 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.307
block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.307 [2024-12-06 13:13:28.596519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 
13:13:28.596651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.596976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 
[2024-12-06 13:13:28.597592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.597999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598027] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.598994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599065] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.308 [2024-12-06 13:13:28.599762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.599787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.599816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.599845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.599879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.599910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.599944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 
13:13:28.599976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.600989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 
[2024-12-06 13:13:28.601356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601779] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.601969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.309 [2024-12-06 13:13:28.602674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.602711] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.602849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.602878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.602906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.602935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.602966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 13:13:28.603680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [2024-12-06 
13:13:28.603710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.310 [... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c:384 repeated continuously from 13:13:28.603741 through 13:13:28.614773; duplicates elided ...] 00:05:42.313 [2024-12-06
13:13:28.614802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.614834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.614864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.614890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.614920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.614951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.614984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 
[2024-12-06 13:13:28.615716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.615965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616252] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.616483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.313 [2024-12-06 13:13:28.617483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617798] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.617980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 
13:13:28.618649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.618977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 [2024-12-06 13:13:28.619617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.314 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:42.316
[2024-12-06 13:13:28.624033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624442] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.624998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625294] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.625996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 
13:13:28.626285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.316 [2024-12-06 13:13:28.626891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.626919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.626944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.626970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.626993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 
[2024-12-06 13:13:28.627104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627793] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.627999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628663] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.628997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 
13:13:28.629535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.629992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.630021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.630050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.317 [2024-12-06 13:13:28.630526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical *ERROR* line repeated, timestamps 2024-12-06 13:13:28.630559 through 13:13:28.641074 ...] 00:05:42.321 [2024-12-06 13:13:28.641108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 
[2024-12-06 13:13:28.641553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641963] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.641992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.642986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643503] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.643974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.321 [2024-12-06 13:13:28.644221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 
13:13:28.644333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.644990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 
[2024-12-06 13:13:28.645357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645801] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.645998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646713] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.646959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.647996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.648024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.648053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.648081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.322 [2024-12-06 13:13:28.648109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 13:13:28.648139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 13:13:28.648175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 13:13:28.648206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 13:13:28.648239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 13:13:28.648268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 13:13:28.648298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.323 [2024-12-06 
13:13:28.648326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:42.325 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:42.326 [2024-12-06 13:13:28.659156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.659973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 
[2024-12-06 13:13:28.660068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660503] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.660998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661416] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.326 [2024-12-06 13:13:28.661697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.661822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.661852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.661881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.661978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 
13:13:28.662470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.662991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 
[2024-12-06 13:13:28.663382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.663850] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.664995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.327 [2024-12-06 13:13:28.665027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665089] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 13:13:28.665955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 [2024-12-06 
13:13:28.665985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.328 (previous message repeated for subsequent requests through 13:13:28.677) 00:05:42.331 [2024-12-06
13:13:28.677177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.677972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 
[2024-12-06 13:13:28.678072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678854] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.678973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.679004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.679032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.331 [2024-12-06 13:13:28.679062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679748] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.679996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.680705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 
13:13:28.680734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 
[2024-12-06 13:13:28.681946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.681971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.332 [2024-12-06 13:13:28.682000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682364] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.682992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683624] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.683993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.684021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.684053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.333 [2024-12-06 13:13:28.684082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated through 2024-12-06 13:13:28.695805; repeats omitted ...]
00:05:42.335 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:42.336 [2024-12-06
13:13:28.695836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.695864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.695894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.695922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.695954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.695985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.696962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 
[2024-12-06 13:13:28.696998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.697028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.697057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.697086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.697112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.697144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.336 [2024-12-06 13:13:28.697172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697412] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.697978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698338] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.698970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699170] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 
13:13:28.699403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.699990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.700404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.337 [2024-12-06 13:13:28.700436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 
[2024-12-06 13:13:28.700742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.700992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701181] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.701979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702100] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.338 [2024-12-06 13:13:28.702666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.341 [2024-12-06 13:13:28.714005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.341 [2024-12-06 13:13:28.714036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.341 [2024-12-06 13:13:28.714069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.341 [2024-12-06 13:13:28.714097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.341 [2024-12-06 13:13:28.714127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 
13:13:28.714880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.714975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 
[2024-12-06 13:13:28.715800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.715991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716247] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.716973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717251] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.717984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.718016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.342 [2024-12-06 13:13:28.718048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 
13:13:28.718560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.718962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 
[2024-12-06 13:13:28.719495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.719810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720104] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.720982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.721014] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.343 [2024-12-06 13:13:28.721046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:42.346 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:42.347 [2024-12-06 13:13:28.732257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 
13:13:28.732731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.732975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 
[2024-12-06 13:13:28.733644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.733706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734412] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.347 [2024-12-06 13:13:28.734551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.734989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735380] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.735972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 
13:13:28.736858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.736989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 
[2024-12-06 13:13:28.737797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.737976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.348 [2024-12-06 13:13:28.738233] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.738978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.349 [2024-12-06 13:13:28.739326] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.352 [2024-12-06 13:13:28.750477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750507] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.750946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.353 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.353 [2024-12-06 13:13:28.927423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.353 [2024-12-06 13:13:28.927612] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 
13:13:28.935907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.935996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 
[2024-12-06 13:13:28.936827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.936996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.937127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.937161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.643 [2024-12-06 13:13:28.937190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937357] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.937616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938663] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.938989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 
13:13:28.939511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.644 [2024-12-06 13:13:28.939752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.939782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.939810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.939839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.939870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.939901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940168] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 
[2024-12-06 13:13:28.940546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.940986] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941895] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.645 [2024-12-06 13:13:28.941926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same *ERROR* line repeated from 00:05:42.645 (2024-12-06 13:13:28.941955) through 00:05:42.650 (2024-12-06 13:13:28.952460)]
> SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952933] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.952992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.953988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.954017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.650 [2024-12-06 13:13:28.954048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 
13:13:28.954268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.954989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 
[2024-12-06 13:13:28.955180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955633] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.955960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.956994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957059] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.651 [2024-12-06 13:13:28.957430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 
13:13:28.957964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.957992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 
[2024-12-06 13:13:28.958966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.958995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:42.652 [2024-12-06 13:13:28.959388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 13:13:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:42.652 [2024-12-06 13:13:28.959678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.959969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.960001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.960034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.652 [2024-12-06 13:13:28.960066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.653 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:42.656 
[2024-12-06 13:13:28.971185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971629] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.656 [2024-12-06 13:13:28.971971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972607] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.972979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.973977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 
13:13:28.974063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.657 [2024-12-06 13:13:28.974588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 
[2024-12-06 13:13:28.974962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.974992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975388] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.658 [2024-12-06 13:13:28.975973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976459] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.658 [2024-12-06 13:13:28.976825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.976855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.976890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.976919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.976947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.976980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 
13:13:28.977366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.977698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.978040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.978079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.978111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659 [2024-12-06 13:13:28.978138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.659
[... ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 — identical message repeated continuously from 13:13:28.978167 through 13:13:28.989137 (elapsed 00:05:42.659–00:05:42.663) ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 
[2024-12-06 13:13:28.989731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.989977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990208] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.990981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991079] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.991987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.663 [2024-12-06 13:13:28.992283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 
13:13:28.992313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.992981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 
[2024-12-06 13:13:28.993207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993650] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.993963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.994615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995120] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.664 [2024-12-06 13:13:28.995840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.665 [2024-12-06 13:13:28.995871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.665 [2024-12-06 13:13:28.995900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.665 [2024-12-06 13:13:28.995932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.665 [2024-12-06 13:13:28.995962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.665 [2024-12-06 13:13:28.995990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.665 [2024-12-06 
13:13:28.996019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.666 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:42.668 [2024-12-06 13:13:29.006909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.006946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.006974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 
[2024-12-06 13:13:29.007373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007804] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.007870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.008519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009331] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.668 [2024-12-06 13:13:29.009703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.009978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 
13:13:29.010229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.010985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 
[2024-12-06 13:13:29.011250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011705] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.011974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012926] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.012996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.669 [2024-12-06 13:13:29.013245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 13:13:29.013851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [2024-12-06 
13:13:29.013881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.670 [last message repeated many times from 13:13:29.013911 through 13:13:29.024996; duplicate log lines elided] 00:05:42.673 [2024-12-06 
13:13:29.025042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.025984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 
[2024-12-06 13:13:29.026066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026483] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.026632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.673 [2024-12-06 13:13:29.027351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027816] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.027971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 
13:13:29.028745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.028991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.029696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 
[2024-12-06 13:13:29.029728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030563] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.674 [2024-12-06 13:13:29.030611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.030977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031490] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.675 [2024-12-06 13:13:29.031949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.676 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:05:42.678 [2024-12-06 13:13:29.042516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:42.678 [2024-12-06 13:13:29.042548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.042579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.042605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.042635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.042665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.042700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043269] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.043977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 
13:13:29.044155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.678 [2024-12-06 13:13:29.044654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.044999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 
[2024-12-06 13:13:29.045613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.045978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046042] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046945] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.046974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.679 [2024-12-06 13:13:29.047916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.047949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.047979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 
13:13:29.048015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.048982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 
[2024-12-06 13:13:29.049203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049650] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.049999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.050027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.050075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.680 [2024-12-06 13:13:29.050104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.680 [2024-12-06 13:13:29.050136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[previous message repeated from 13:13:29.050168 through 13:13:29.061071; several hundred further occurrences elided]
> SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061544] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.061887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 
13:13:29.062753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.062993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.063025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.063059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.063088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.683 [2024-12-06 13:13:29.063118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 
[2024-12-06 13:13:29.063658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.063986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064080] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.064997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065627] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.065989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 
13:13:29.066566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.066975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.684 [2024-12-06 13:13:29.067471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 
[2024-12-06 13:13:29.067570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.067975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.068006] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.685 [2024-12-06 13:13:29.068034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated between 13:13:29.068094 and 13:13:29.073696; repeats elided]
00:05:42.686 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical *ERROR* line repeated between 13:13:29.074251 and 13:13:29.079327; repeats elided]
00:05:42.687 [2024-12-06 13:13:29.079357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.687 [2024-12-06 13:13:29.079756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079849] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.079997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 
13:13:29.080748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.080997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.081972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 
[2024-12-06 13:13:29.082045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082477] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.688 [2024-12-06 13:13:29.082972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083451] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.083737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.688 [2024-12-06 13:13:29.084633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 
13:13:29.084919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.084978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085276] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 
[2024-12-06 13:13:29.085853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.085985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086287] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.689 [2024-12-06 13:13:29.086436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated over timestamps 00:05:42.689–00:05:42.692 (2024-12-06 13:13:29.086469 through 13:13:29.097375)]
00:05:42.692 [2024-12-06 13:13:29.097405] ctrlr_bdev.c:
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.692 [2024-12-06 13:13:29.097873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.098205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.098238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.098279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.098306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.692 [2024-12-06 13:13:29.098337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098629] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.098988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099167] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 
13:13:29.099546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099800] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.099984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 
[2024-12-06 13:13:29.100809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.100991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101246] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.693 [2024-12-06 13:13:29.101763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.101982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102105] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102385] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.102987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 
13:13:29.103327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.103972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 
[2024-12-06 13:13:29.104223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104654] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.694 [2024-12-06 13:13:29.104685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:42.696 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:05:42.697 [2024-12-06 13:13:29.115655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.697 [2024-12-06 13:13:29.115943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.115973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 
[2024-12-06 13:13:29.116160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116618] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.116744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.117994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118116] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.118997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 
13:13:29.119051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.698 [2024-12-06 13:13:29.119777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.119807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.119864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.119895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.119926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.119957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.119988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 
[2024-12-06 13:13:29.120473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 true 00:05:42.699 [2024-12-06 13:13:29.120720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120910] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.120986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121169] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121834] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.121985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 13:13:29.122821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699 [2024-12-06 
13:13:29.122857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.699
00:05:42.702 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:42.702
00:05:42.702 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.702
[2024-12-06 13:13:29.133937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.133987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 
[2024-12-06 13:13:29.134796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.134977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135230] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.135995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136394] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.136973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 
13:13:29.137322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.703 [2024-12-06 13:13:29.137430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.137875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 
[2024-12-06 13:13:29.138320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138806] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.138934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.139971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140212] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.140971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.141001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.141030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.141068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 13:13:29.141100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.704 [2024-12-06 
13:13:29.141130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:05:42.707 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 
[2024-12-06 13:13:29.152840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.152977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153684] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.153992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154582] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.154998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.708 [2024-12-06 13:13:29.155432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 
13:13:29.155682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.155968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 
[2024-12-06 13:13:29.156593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156865] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.156981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157048] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.157870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158475] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.709 [2024-12-06 13:13:29.158725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.158992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 13:13:29.159346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.710 [2024-12-06 
13:13:29.159376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [... identical *ERROR* message repeated for timestamps 13:13:29.159408 through 13:13:29.170515 ...] 00:05:42.713 [2024-12-06 
13:13:29.170547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.170976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 
[2024-12-06 13:13:29.171443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171891] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171960] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.171991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.713 [2024-12-06 13:13:29.172368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.172399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.172431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.714 [2024-12-06 13:13:29.172463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.172500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.172531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.172560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173349] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.173993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 
13:13:29.174236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.174974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175166] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 
[2024-12-06 13:13:29.175261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.714 [2024-12-06 13:13:29.175449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175690] ctrlr_bdev.c: 
384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.175970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176602] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.176972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.177001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.177034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.715 [2024-12-06 13:13:29.177063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:05:42.718 [2024-12-06 13:13:29.187983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188392] 
ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:42.718 [2024-12-06 13:13:29.188829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:05:43.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.664 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.925 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:43.925 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:43.925 true 00:05:44.186 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:44.186 13:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.128 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:05:45.128 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.128 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:45.128 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:45.128 true 00:05:45.388 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:45.388 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.388 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.649 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:45.649 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:45.910 true 00:05:45.910 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:45.910 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:46.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.970 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.231 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:47.231 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:47.231 true 00:05:47.492 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:47.492 13:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.323 13:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.323 13:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:48.323 13:13:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:48.582 true 00:05:48.582 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:48.582 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.842 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.842 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:48.842 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:49.102 true 00:05:49.103 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:49.103 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.362 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.362 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:49.362 13:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:49.621 true 00:05:49.621 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:49.621 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.880 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.137 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:50.137 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:50.137 true 00:05:50.137 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:50.137 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.394 13:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.394 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:05:50.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.651 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:50.651 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:50.651 true 00:05:50.651 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:50.651 13:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.586 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.844 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:51.844 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:51.844 true 00:05:51.844 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:51.844 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:52.103 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.362 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:52.362 13:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:52.362 true 00:05:52.362 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:52.362 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.622 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.882 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:52.882 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:53.141 true 00:05:53.141 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:53.141 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.141 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.401 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:53.401 13:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:53.662 true 00:05:53.662 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:53.662 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.603 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.863 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:54.863 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:55.124 true 
00:05:55.124 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:55.124 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.066 13:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.066 13:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:56.066 13:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:56.327 true 00:05:56.327 13:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:56.327 13:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.327 13:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.586 13:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:56.586 13:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:56.846 true 00:05:56.846 13:13:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:56.846 13:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 13:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 13:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:58.048 13:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:58.308 true 00:05:58.308 13:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:58.308 13:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.247 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:05:59.247 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.247 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:59.247 13:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:59.507 true 00:05:59.507 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:05:59.507 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.766 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.766 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:59.766 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:00.025 true 00:06:00.025 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:00.025 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.285 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.285 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:00.285 13:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:00.546 true 00:06:00.546 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:00.546 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.808 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.068 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:01.068 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:01.068 true 00:06:01.068 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:01.068 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.328 13:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.589 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:01.589 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:01.589 true 00:06:01.589 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:01.589 13:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.531 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.791 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:02.791 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:02.791 true 00:06:02.791 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:02.791 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.052 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.313 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:03.313 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:03.313 true 00:06:03.313 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:03.313 13:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.697 Initializing NVMe Controllers 00:06:04.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:04.697 Controller IO queue size 128, less than required. 00:06:04.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:04.697 Controller IO queue size 128, less than required. 00:06:04.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:04.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:04.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:04.697 Initialization complete. Launching workers. 
00:06:04.697 ========================================================
00:06:04.697 Latency(us)
00:06:04.697 Device Information                                                       :      IOPS    MiB/s   Average      min        max
00:06:04.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   3012.37     1.47  22548.70  1060.91 1284488.40
00:06:04.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  15487.23     7.56   8264.76  1128.60  402294.38
00:06:04.697 ========================================================
00:06:04.697 Total                                                                    :  18499.60     9.03  10590.67  1060.91 1284488.40
00:06:04.697
00:06:04.697 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.697 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:04.698 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:04.958 true 00:06:04.958 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1934845 00:06:04.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1934845) - No such process 00:06:04.958 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1934845 00:06:04.958 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.958 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.219
13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:05.219 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:05.219 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:05.219 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.219 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:05.480 null0 00:06:05.480 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.480 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.480 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:05.480 null1 00:06:05.741 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.741 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.741 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:05.741 null2 00:06:05.741 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.741 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.741 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:06.002 null3 00:06:06.002 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.002 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.002 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:06.263 null4 00:06:06.263 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.263 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.263 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:06.263 null5 00:06:06.263 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.263 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.263 13:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:06.523 null6 00:06:06.523 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.523 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.523 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:06.782 null7 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.782 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1941664 1941665 1941667 1941669 1941671 1941673 1941675 1941677 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.783 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.042 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.301 13:13:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.301 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.302 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.302 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.302 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.302 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.302 13:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.561 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.561 13:13:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.821 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.081 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.362 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.362 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.362 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.362 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:06:08.362 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.363 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.363 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.363 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.622 13:13:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.622 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.881 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.882 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.882 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.882 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.882 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.141 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.142 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.401 13:13:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.401 13:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.401 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.660 13:13:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.660 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.922 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.181 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.181 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.181 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.181 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.181 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.181 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.182 13:13:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.182 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.441 13:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.441 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.441 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.442 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.442 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:10.702 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:10.702 rmmod nvme_tcp 00:06:10.702 rmmod nvme_fabrics 00:06:10.702 rmmod nvme_keyring 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1934431 ']' 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1934431 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1934431 ']' 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1934431 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1934431 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:10.961 13:13:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1934431' 00:06:10.961 killing process with pid 1934431 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1934431 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1934431 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.961 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:06:13.504 00:06:13.504 real 0m49.693s 00:06:13.504 user 3m15.622s 00:06:13.504 sys 0m16.208s 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.504 ************************************ 00:06:13.504 END TEST nvmf_ns_hotplug_stress 00:06:13.504 ************************************ 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.504 ************************************ 00:06:13.504 START TEST nvmf_delete_subsystem 00:06:13.504 ************************************ 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:13.504 * Looking for test storage... 
00:06:13.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.504 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:13.505 13:13:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.505 --rc genhtml_branch_coverage=1 00:06:13.505 --rc genhtml_function_coverage=1 00:06:13.505 --rc genhtml_legend=1 00:06:13.505 --rc geninfo_all_blocks=1 00:06:13.505 --rc geninfo_unexecuted_blocks=1 00:06:13.505 00:06:13.505 ' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.505 --rc genhtml_branch_coverage=1 00:06:13.505 --rc genhtml_function_coverage=1 00:06:13.505 --rc genhtml_legend=1 00:06:13.505 --rc geninfo_all_blocks=1 00:06:13.505 --rc geninfo_unexecuted_blocks=1 00:06:13.505 00:06:13.505 ' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.505 --rc genhtml_branch_coverage=1 00:06:13.505 --rc genhtml_function_coverage=1 00:06:13.505 --rc genhtml_legend=1 00:06:13.505 --rc geninfo_all_blocks=1 00:06:13.505 --rc geninfo_unexecuted_blocks=1 00:06:13.505 00:06:13.505 ' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.505 --rc genhtml_branch_coverage=1 00:06:13.505 --rc genhtml_function_coverage=1 00:06:13.505 --rc genhtml_legend=1 00:06:13.505 --rc geninfo_all_blocks=1 00:06:13.505 --rc geninfo_unexecuted_blocks=1 00:06:13.505 00:06:13.505 ' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.505 13:13:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:13.505 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.506 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.506 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.506 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:13.506 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:13.506 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:13.506 13:13:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.656 13:14:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:21.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.656 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:21.657 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:21.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:4b:00.1: cvl_0_1' 00:06:21.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:21.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:06:21.657 00:06:21.657 --- 10.0.0.2 ping statistics --- 00:06:21.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.657 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:06:21.657 00:06:21.657 --- 10.0.0.1 ping statistics --- 00:06:21.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.657 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:21.657 13:14:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1946847 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1946847 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1946847 ']' 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.657 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.657 [2024-12-06 13:14:07.552281] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:21.657 [2024-12-06 13:14:07.552350] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.657 [2024-12-06 13:14:07.650699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.657 [2024-12-06 13:14:07.701654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.657 [2024-12-06 13:14:07.701706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.657 [2024-12-06 13:14:07.701720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.657 [2024-12-06 13:14:07.701727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.657 [2024-12-06 13:14:07.701733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
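Stripped of the xtrace noise, the network plumbing traced earlier (nvmf/common.sh@250–291) is a small, repeatable recipe: move one NIC port into a private namespace for the target, address both ends, open TCP port 4420, and ping-check both directions. A minimal sketch follows; the interface names (cvl_0_0/cvl_0_1) and 10.0.0.x addresses come from this log, everything else is illustrative, and the script defaults to a dry run (DRY_RUN=1) since the real commands need root and a two-port NIC.

```shell
#!/usr/bin/env bash
set -eu
# Dry-run sketch of the namespace setup performed by nvmf/common.sh above.
# DRY_RUN=1 (default) only prints the commands; set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                            # private namespace for the target side
run ip link set cvl_0_0 netns "$NS"               # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address (host namespace)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # default NVMe/TCP port
run ping -c 1 10.0.0.2                            # host -> target reachability check
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> host reachability check
```

The namespace boundary is what lets a single machine act as both initiator and target over a real NIC: the kernel routes the traffic out one port and back in the other instead of short-circuiting it over loopback.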
00:06:21.657 [2024-12-06 13:14:07.703370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.657 [2024-12-06 13:14:07.703375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.918 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.918 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:21.918 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.919 [2024-12-06 13:14:08.407573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.919 [2024-12-06 13:14:08.431867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.919 NULL1 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.919 Delay0 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.919 13:14:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1947141 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:21.919 13:14:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:21.919 [2024-12-06 13:14:08.558956] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
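The setup traced in delete_subsystem.sh@15–28 boils down to a short RPC sequence: create the TCP transport, one subsystem, a listener on 10.0.0.2:4420, then a null bdev wrapped in a delay bdev (so I/O stays in flight long enough for the deletion to race it) exposed as a namespace, with spdk_nvme_perf hammering the target while the subsystem is deleted underneath it. A hedged sketch of that sequence: the rpc.py path is an assumption, and RPC is pointed at echo here so the block is a dry run rather than requiring a live nvmf_tgt.

```shell
#!/usr/bin/env bash
set -eu
# Dry-run replay of the RPC calls traced above; each line prints the command.
# Point RPC at scripts/rpc.py (path assumed) against a running nvmf_tgt to
# execute them for real.
RPC="echo scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags as in the trace
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512           # null backing bdev: 1000 MB, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns "$NQN" Delay0       # expose the delay bdev as a namespace
# ... spdk_nvme_perf runs in the background, then the race under test:
$RPC nvmf_delete_subsystem "$NQN"
```

The delay bdev is the key ingredient: with per-I/O latencies this large, perf's queue-depth-128 workload guarantees a backlog of outstanding commands when nvmf_delete_subsystem fires, which is exactly what produces the aborted-command completions (sct=0, sc=8) that follow in the log.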
00:06:23.829 13:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:23.829 13:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.829 13:14:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 
00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 [2024-12-06 13:14:10.687976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf802c0 is same with the state(6) to be set 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 Read 
completed with error (sct=0, sc=8) 00:06:24.089 Read completed with error (sct=0, sc=8) 00:06:24.089 Write completed with error (sct=0, sc=8) 00:06:24.089 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 starting I/O failed: -6 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read 
completed with error (sct=0, sc=8) 00:06:24.090 [2024-12-06 13:14:10.688701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbcdc00d680 is same with the state(6) to be set 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed 
with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 
00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Write completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:24.090 [2024-12-06 
13:14:10.689215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf80680 is same with the state(6) to be set 00:06:24.090 Read completed with error (sct=0, sc=8) 00:06:25.031 [2024-12-06 13:14:11.657860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf819b0 is same with the state(6) to be set 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Write completed with error (sct=0, sc=8) 00:06:25.291 [2024-12-06 13:14:11.690427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbcdc000c40 is same with the state(6) to be set 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read completed with error (sct=0, sc=8) 00:06:25.291 Read 
completed with error (sct=0, sc=8)
00:06:25.291 Read completed with error (sct=0, sc=8)
00:06:25.291 Write completed with error (sct=0, sc=8)
... (further Read/Write "completed with error (sct=0, sc=8)" lines trimmed) ...
00:06:25.291 [2024-12-06 13:14:11.690534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbcdc00d350 is same with the state(6) to be set
... (further Read/Write "completed with error (sct=0, sc=8)" lines trimmed) ...
00:06:25.291 [2024-12-06 13:14:11.692026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf804a0 is same with the state(6) to be set
... (further Read/Write "completed with error (sct=0, sc=8)" lines trimmed) ...
00:06:25.292 [2024-12-06 13:14:11.692161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf80860 is same with the state(6) to be set
00:06:25.292 Initializing NVMe Controllers
00:06:25.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:25.292 Controller IO queue size 128, less than required.
00:06:25.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:25.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:25.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:25.292 Initialization complete. Launching workers.
00:06:25.292 ========================================================
00:06:25.292                                                                                                              Latency(us)
00:06:25.292 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:06:25.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     171.95       0.08  892165.09    1014.85 1010762.27
00:06:25.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     161.01       0.08  946084.34     537.82 2001347.43
00:06:25.292 ========================================================
00:06:25.292 Total                                                                     :     332.96       0.16  918239.47     537.82 2001347.43
00:06:25.292 
00:06:25.292 [2024-12-06 13:14:11.692869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf819b0 (9): Bad file descriptor
00:06:25.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:25.292 13:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.292 13:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@34 -- # delay=0 00:06:25.292 13:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1947141 00:06:25.292 13:14:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1947141 00:06:25.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1947141) - No such process 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1947141 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1947141 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1947141 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.552 13:14:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.552 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.812 [2024-12-06 13:14:12.225675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1947877 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877 00:06:25.812 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.812 [2024-12-06 13:14:12.331246] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:26.382 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.382 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:26.382 13:14:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:26.642 13:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.642 13:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:26.642 13:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.210 13:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.210 13:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:27.210 13:14:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.778 13:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.778 13:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:27.778 13:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.346 13:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.346 13:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:28.346 13:14:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.969 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.969 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:28.969 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.969 Initializing NVMe Controllers
00:06:28.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:28.969 Controller IO queue size 128, less than required.
00:06:28.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:28.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:28.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:28.969 Initialization complete. Launching workers.
00:06:28.969 ========================================================
00:06:28.969                                                                                                              Latency(us)
00:06:28.969 Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:06:28.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002042.08 1000100.51 1005765.40
00:06:28.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1003025.49 1000386.99 1008007.44
00:06:28.969 ========================================================
00:06:28.969 Total                                                                     :     256.00       0.12 1002533.79 1000100.51 1008007.44
00:06:28.969 
00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1947877
00:06:29.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1947877) - No such process
00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1947877
00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:29.228 rmmod nvme_tcp 00:06:29.228 rmmod nvme_fabrics 00:06:29.228 rmmod nvme_keyring 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1946847 ']' 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1946847 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1946847 ']' 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1946847 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:29.228 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.228 13:14:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1946847 00:06:29.487 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.487 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.487 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1946847' 00:06:29.487 killing process with pid 1946847 00:06:29.487 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1946847 00:06:29.487 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1946847 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns
00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:29.487 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:32.030 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:32.031 
00:06:32.031 real	0m18.391s
00:06:32.031 user	0m30.876s
00:06:32.031 sys	0m6.765s
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:32.031 ************************************
00:06:32.031 END TEST nvmf_delete_subsystem
00:06:32.031 ************************************
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:32.031 ************************************
00:06:32.031 START TEST nvmf_host_management
00:06:32.031 ************************************
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:32.031 * Looking for test storage... 
00:06:32.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:32.031 13:14:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.031 13:14:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.031 --rc genhtml_branch_coverage=1 00:06:32.031 --rc genhtml_function_coverage=1 00:06:32.031 --rc genhtml_legend=1 00:06:32.031 --rc geninfo_all_blocks=1 00:06:32.031 --rc geninfo_unexecuted_blocks=1 00:06:32.031 00:06:32.031 ' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.031 --rc genhtml_branch_coverage=1 00:06:32.031 --rc genhtml_function_coverage=1 00:06:32.031 --rc genhtml_legend=1 00:06:32.031 --rc geninfo_all_blocks=1 00:06:32.031 --rc geninfo_unexecuted_blocks=1 00:06:32.031 00:06:32.031 ' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.031 --rc genhtml_branch_coverage=1 00:06:32.031 --rc genhtml_function_coverage=1 00:06:32.031 --rc genhtml_legend=1 00:06:32.031 --rc geninfo_all_blocks=1 00:06:32.031 --rc geninfo_unexecuted_blocks=1 00:06:32.031 00:06:32.031 ' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.031 --rc genhtml_branch_coverage=1 00:06:32.031 --rc genhtml_function_coverage=1 00:06:32.031 --rc genhtml_legend=1 00:06:32.031 --rc geninfo_all_blocks=1 00:06:32.031 --rc geninfo_unexecuted_blocks=1 00:06:32.031 00:06:32.031 ' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.031 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:32.032 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:40.173 13:14:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:40.173 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.174 13:14:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:40.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:40.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:40.174 13:14:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:40.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:40.174 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:40.174 13:14:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:40.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:06:40.174 00:06:40.174 --- 10.0.0.2 ping statistics --- 00:06:40.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.174 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:06:40.174 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:06:40.174 00:06:40.175 --- 10.0.0.1 ping statistics --- 00:06:40.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.175 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1952894 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1952894 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1952894 ']' 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.175 13:14:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.175 [2024-12-06 13:14:26.050982] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:40.175 [2024-12-06 13:14:26.051051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.175 [2024-12-06 13:14:26.153924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.175 [2024-12-06 13:14:26.207313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.175 [2024-12-06 13:14:26.207366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.175 [2024-12-06 13:14:26.207375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.175 [2024-12-06 13:14:26.207383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.175 [2024-12-06 13:14:26.207389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:40.175 [2024-12-06 13:14:26.209526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.175 [2024-12-06 13:14:26.209670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.175 [2024-12-06 13:14:26.209829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.175 [2024-12-06 13:14:26.209829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.435 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 [2024-12-06 13:14:26.925423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:40.436 13:14:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.436 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 Malloc0 00:06:40.436 [2024-12-06 13:14:27.006933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1952984 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1952984 /var/tmp/bdevperf.sock 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1952984 ']' 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:40.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:40.436 { 00:06:40.436 "params": { 00:06:40.436 "name": "Nvme$subsystem", 00:06:40.436 "trtype": "$TEST_TRANSPORT", 00:06:40.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:40.436 "adrfam": "ipv4", 00:06:40.436 "trsvcid": "$NVMF_PORT", 00:06:40.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:40.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:40.436 "hdgst": ${hdgst:-false}, 
00:06:40.436 "ddgst": ${ddgst:-false} 00:06:40.436 }, 00:06:40.436 "method": "bdev_nvme_attach_controller" 00:06:40.436 } 00:06:40.436 EOF 00:06:40.436 )") 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:40.436 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:40.436 "params": { 00:06:40.436 "name": "Nvme0", 00:06:40.436 "trtype": "tcp", 00:06:40.436 "traddr": "10.0.0.2", 00:06:40.436 "adrfam": "ipv4", 00:06:40.436 "trsvcid": "4420", 00:06:40.436 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:40.436 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:40.436 "hdgst": false, 00:06:40.436 "ddgst": false 00:06:40.436 }, 00:06:40.436 "method": "bdev_nvme_attach_controller" 00:06:40.436 }' 00:06:40.697 [2024-12-06 13:14:27.117794] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:06:40.697 [2024-12-06 13:14:27.117867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1952984 ] 00:06:40.697 [2024-12-06 13:14:27.211325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.697 [2024-12-06 13:14:27.264159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.957 Running I/O for 10 seconds... 
00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.529 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.530 13:14:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.530 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.530 [2024-12-06 13:14:28.027957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:41.530 [2024-12-06 13:14:28.028160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.530 [2024-12-06 13:14:28.028167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:7 through cid:61, lba advancing in 128-block strides from 107392 to 114304 (cid:58/59 logged out of order); repeats elided ...]
00:06:41.532 [2024-12-06 13:14:28.029190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.532 [2024-12-06 13:14:28.029198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:41.532 [2024-12-06 13:14:28.029208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa19af0 is same with the state(6) to be set 00:06:41.532 [2024-12-06 13:14:28.030521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:41.532 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.532 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:41.532 task offset: 114560 on job bdev=Nvme0n1 fails 00:06:41.532 00:06:41.532 Latency(us) 00:06:41.532 [2024-12-06T12:14:28.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:41.532 Job: Nvme0n1 ended in about 0.59 seconds with error 00:06:41.532 Verification LBA range: start 0x0 length 0x400 00:06:41.532 Nvme0n1 : 0.59 1398.79 87.42 107.60 0.00 41486.45 5843.63 37137.07 00:06:41.532 [2024-12-06T12:14:28.191Z] =================================================================================================================== 00:06:41.532 [2024-12-06T12:14:28.191Z] Total : 1398.79 87.42 107.60 0.00 41486.45 5843.63 37137.07 00:06:41.532 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.532 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.532 [2024-12-06 13:14:28.032783] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:06:41.532 [2024-12-06 13:14:28.032824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x800c20 (9): Bad file descriptor 00:06:41.532 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.532 13:14:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:41.532 [2024-12-06 13:14:28.167686] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1952984 00:06:42.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1952984) - No such process 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:06:42.555 { 00:06:42.555 "params": { 00:06:42.555 "name": "Nvme$subsystem", 00:06:42.555 "trtype": "$TEST_TRANSPORT", 00:06:42.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:42.555 "adrfam": "ipv4", 00:06:42.555 "trsvcid": "$NVMF_PORT", 00:06:42.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:42.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:42.555 "hdgst": ${hdgst:-false}, 00:06:42.555 "ddgst": ${ddgst:-false} 00:06:42.555 }, 00:06:42.555 "method": "bdev_nvme_attach_controller" 00:06:42.555 } 00:06:42.555 EOF 00:06:42.555 )") 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:42.555 13:14:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:42.555 "params": { 00:06:42.555 "name": "Nvme0", 00:06:42.555 "trtype": "tcp", 00:06:42.555 "traddr": "10.0.0.2", 00:06:42.555 "adrfam": "ipv4", 00:06:42.555 "trsvcid": "4420", 00:06:42.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:42.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:42.555 "hdgst": false, 00:06:42.555 "ddgst": false 00:06:42.555 }, 00:06:42.555 "method": "bdev_nvme_attach_controller" 00:06:42.555 }' 00:06:42.555 [2024-12-06 13:14:29.102602] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:42.555 [2024-12-06 13:14:29.102657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953476 ] 00:06:42.555 [2024-12-06 13:14:29.190003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.857 [2024-12-06 13:14:29.225124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.857 Running I/O for 1 seconds... 00:06:44.238 1604.00 IOPS, 100.25 MiB/s 00:06:44.239 Latency(us) 00:06:44.239 [2024-12-06T12:14:30.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.239 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:44.239 Verification LBA range: start 0x0 length 0x400 00:06:44.239 Nvme0n1 : 1.02 1637.96 102.37 0.00 0.00 38399.82 5324.80 32549.55 00:06:44.239 [2024-12-06T12:14:30.898Z] =================================================================================================================== 00:06:44.239 [2024-12-06T12:14:30.898Z] Total : 1637.96 102.37 0.00 0.00 38399.82 5324.80 32549.55 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:44.239 13:14:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:44.239 rmmod nvme_tcp 00:06:44.239 rmmod nvme_fabrics 00:06:44.239 rmmod nvme_keyring 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1952894 ']' 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1952894 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1952894 ']' 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1952894 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1952894 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1952894' 00:06:44.239 killing process with pid 1952894 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1952894 00:06:44.239 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1952894 00:06:44.239 [2024-12-06 13:14:30.875467] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.499 13:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.411 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:46.411 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:46.411 00:06:46.411 real 0m14.799s 00:06:46.411 user 0m23.596s 00:06:46.411 sys 0m6.882s 00:06:46.411 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.411 13:14:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.411 ************************************ 00:06:46.411 END TEST nvmf_host_management 00:06:46.411 ************************************ 00:06:46.411 13:14:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:46.411 13:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.411 13:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.411 13:14:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.411 ************************************ 00:06:46.411 START TEST nvmf_lvol 00:06:46.411 ************************************ 00:06:46.411 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:46.671 * Looking for test storage... 
00:06:46.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.671 13:14:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.671 --rc genhtml_branch_coverage=1 00:06:46.671 --rc genhtml_function_coverage=1 00:06:46.671 --rc genhtml_legend=1 00:06:46.671 --rc geninfo_all_blocks=1 00:06:46.671 --rc geninfo_unexecuted_blocks=1 
00:06:46.671 00:06:46.671 ' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.671 --rc genhtml_branch_coverage=1 00:06:46.671 --rc genhtml_function_coverage=1 00:06:46.671 --rc genhtml_legend=1 00:06:46.671 --rc geninfo_all_blocks=1 00:06:46.671 --rc geninfo_unexecuted_blocks=1 00:06:46.671 00:06:46.671 ' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.671 --rc genhtml_branch_coverage=1 00:06:46.671 --rc genhtml_function_coverage=1 00:06:46.671 --rc genhtml_legend=1 00:06:46.671 --rc geninfo_all_blocks=1 00:06:46.671 --rc geninfo_unexecuted_blocks=1 00:06:46.671 00:06:46.671 ' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.671 --rc genhtml_branch_coverage=1 00:06:46.671 --rc genhtml_function_coverage=1 00:06:46.671 --rc genhtml_legend=1 00:06:46.671 --rc geninfo_all_blocks=1 00:06:46.671 --rc geninfo_unexecuted_blocks=1 00:06:46.671 00:06:46.671 ' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.671 13:14:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.671 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:46.672 13:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:54.809 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:54.809 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.809 
13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:54.809 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.809 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.810 13:14:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:54.810 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:54.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:54.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:06:54.810 00:06:54.810 --- 10.0.0.2 ping statistics --- 00:06:54.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.810 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:06:54.810 00:06:54.810 --- 10.0.0.1 ping statistics --- 00:06:54.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.810 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1957998 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1957998 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1957998 ']' 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.810 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:54.810 [2024-12-06 13:14:40.884577] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:06:54.810 [2024-12-06 13:14:40.884643] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.810 [2024-12-06 13:14:40.985267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.810 [2024-12-06 13:14:41.038039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.810 [2024-12-06 13:14:41.038090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.810 [2024-12-06 13:14:41.038098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.810 [2024-12-06 13:14:41.038106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.810 [2024-12-06 13:14:41.038112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:54.810 [2024-12-06 13:14:41.040138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.810 [2024-12-06 13:14:41.040296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.810 [2024-12-06 13:14:41.040297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.071 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.071 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:55.071 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.071 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.071 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.332 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.332 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:55.332 [2024-12-06 13:14:41.912171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.332 13:14:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:55.593 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:55.593 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:55.855 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:55.855 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:56.116 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:56.377 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e183df9f-22f6-4314-8f69-5e62c08ff1e4 00:06:56.377 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e183df9f-22f6-4314-8f69-5e62c08ff1e4 lvol 20 00:06:56.377 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b29b0a60-da42-41dc-9b3e-3ae29afbfeba 00:06:56.377 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:56.639 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b29b0a60-da42-41dc-9b3e-3ae29afbfeba 00:06:56.899 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:57.160 [2024-12-06 13:14:43.557600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.160 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:57.160 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1958698 00:06:57.160 13:14:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:57.160 13:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:58.102 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b29b0a60-da42-41dc-9b3e-3ae29afbfeba MY_SNAPSHOT 00:06:58.363 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d6283ed9-adad-453c-b6b5-2429c65249f5 00:06:58.363 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b29b0a60-da42-41dc-9b3e-3ae29afbfeba 30 00:06:58.623 13:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d6283ed9-adad-453c-b6b5-2429c65249f5 MY_CLONE 00:06:58.883 13:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3047395c-e6f3-478e-bf01-2eba334ead18 00:06:58.883 13:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3047395c-e6f3-478e-bf01-2eba334ead18 00:06:59.142 13:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1958698 00:07:09.139 Initializing NVMe Controllers 00:07:09.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:09.139 Controller IO queue size 128, less than required. 00:07:09.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:09.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:09.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:09.139 Initialization complete. Launching workers. 00:07:09.139 ======================================================== 00:07:09.139 Latency(us) 00:07:09.139 Device Information : IOPS MiB/s Average min max 00:07:09.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16183.34 63.22 7910.45 1513.01 67475.82 00:07:09.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17203.63 67.20 7440.79 1732.46 47774.50 00:07:09.139 ======================================================== 00:07:09.139 Total : 33386.97 130.42 7668.44 1513.01 67475.82 00:07:09.139 00:07:09.139 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.139 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b29b0a60-da42-41dc-9b3e-3ae29afbfeba 00:07:09.139 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e183df9f-22f6-4314-8f69-5e62c08ff1e4 00:07:09.139 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:09.139 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.140 rmmod nvme_tcp 00:07:09.140 rmmod nvme_fabrics 00:07:09.140 rmmod nvme_keyring 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1957998 ']' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1957998 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1957998 ']' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1957998 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1957998 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1957998' 00:07:09.140 killing process with pid 1957998 00:07:09.140 13:14:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1957998 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1957998 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.140 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.522 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:10.522 00:07:10.522 real 0m23.912s 00:07:10.522 user 1m4.771s 00:07:10.522 sys 0m8.574s 00:07:10.522 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.522 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.522 ************************************ 00:07:10.522 END TEST 
nvmf_lvol 00:07:10.522 ************************************ 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.522 ************************************ 00:07:10.522 START TEST nvmf_lvs_grow 00:07:10.522 ************************************ 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:10.522 * Looking for test storage... 00:07:10.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:10.522 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.784 13:14:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:10.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.784 --rc genhtml_branch_coverage=1 00:07:10.784 --rc genhtml_function_coverage=1 00:07:10.784 --rc genhtml_legend=1 00:07:10.784 --rc geninfo_all_blocks=1 00:07:10.784 --rc geninfo_unexecuted_blocks=1 00:07:10.784 00:07:10.784 ' 
00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:10.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.784 --rc genhtml_branch_coverage=1 00:07:10.784 --rc genhtml_function_coverage=1 00:07:10.784 --rc genhtml_legend=1 00:07:10.784 --rc geninfo_all_blocks=1 00:07:10.784 --rc geninfo_unexecuted_blocks=1 00:07:10.784 00:07:10.784 ' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:10.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.784 --rc genhtml_branch_coverage=1 00:07:10.784 --rc genhtml_function_coverage=1 00:07:10.784 --rc genhtml_legend=1 00:07:10.784 --rc geninfo_all_blocks=1 00:07:10.784 --rc geninfo_unexecuted_blocks=1 00:07:10.784 00:07:10.784 ' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:10.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.784 --rc genhtml_branch_coverage=1 00:07:10.784 --rc genhtml_function_coverage=1 00:07:10.784 --rc genhtml_legend=1 00:07:10.784 --rc geninfo_all_blocks=1 00:07:10.784 --rc geninfo_unexecuted_blocks=1 00:07:10.784 00:07:10.784 ' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.784 13:14:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.784 
13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.784 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.785 13:14:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.785 
13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.785 13:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:18.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:18.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:18.927 
13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:18.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:18.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:18.927 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:18.928 13:15:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:18.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:07:18.928 00:07:18.928 --- 10.0.0.2 ping statistics --- 00:07:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.928 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:18.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:07:18.928 00:07:18.928 --- 10.0.0.1 ping statistics --- 00:07:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.928 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1965184 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1965184 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1965184 ']' 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.928 13:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.928 [2024-12-06 13:15:04.858752] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:18.928 [2024-12-06 13:15:04.858816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.928 [2024-12-06 13:15:04.957587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.928 [2024-12-06 13:15:05.009515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.928 [2024-12-06 13:15:05.009561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.928 [2024-12-06 13:15:05.009569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.928 [2024-12-06 13:15:05.009576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.928 [2024-12-06 13:15:05.009582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:18.928 [2024-12-06 13:15:05.010357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.188 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:19.448 [2024-12-06 13:15:05.886120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:19.448 ************************************ 00:07:19.448 START TEST lvs_grow_clean 00:07:19.448 ************************************ 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.448 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.708 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:19.708 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:19.969 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:19.969 13:15:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:19.969 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:19.969 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:19.969 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:19.969 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a lvol 150 00:07:20.229 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2eecb1b8-ef4c-426b-a606-08fbb458ba6c 00:07:20.229 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.230 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:20.491 [2024-12-06 13:15:06.963031] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:20.491 [2024-12-06 13:15:06.963103] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:20.491 true 00:07:20.491 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:20.491 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:20.752 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:20.752 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:20.752 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2eecb1b8-ef4c-426b-a606-08fbb458ba6c 00:07:21.013 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:21.274 [2024-12-06 13:15:07.673321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1965887 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1965887 /var/tmp/bdevperf.sock 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1965887 ']' 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.274 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:21.535 [2024-12-06 13:15:07.936887] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:21.535 [2024-12-06 13:15:07.936957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965887 ] 00:07:21.535 [2024-12-06 13:15:08.028691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.535 [2024-12-06 13:15:08.080659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.107 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.107 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:22.107 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:22.680 Nvme0n1 00:07:22.680 13:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:22.680 [ 00:07:22.680 { 00:07:22.680 "name": "Nvme0n1", 00:07:22.680 "aliases": [ 00:07:22.680 "2eecb1b8-ef4c-426b-a606-08fbb458ba6c" 00:07:22.680 ], 00:07:22.680 "product_name": "NVMe disk", 00:07:22.680 "block_size": 4096, 00:07:22.680 "num_blocks": 38912, 00:07:22.680 "uuid": "2eecb1b8-ef4c-426b-a606-08fbb458ba6c", 00:07:22.680 "numa_id": 0, 00:07:22.680 "assigned_rate_limits": { 00:07:22.680 "rw_ios_per_sec": 0, 00:07:22.680 "rw_mbytes_per_sec": 0, 00:07:22.680 "r_mbytes_per_sec": 0, 00:07:22.680 "w_mbytes_per_sec": 0 00:07:22.680 }, 00:07:22.680 "claimed": false, 00:07:22.680 "zoned": false, 00:07:22.680 "supported_io_types": { 00:07:22.680 "read": true, 
00:07:22.680 "write": true, 00:07:22.680 "unmap": true, 00:07:22.680 "flush": true, 00:07:22.680 "reset": true, 00:07:22.680 "nvme_admin": true, 00:07:22.680 "nvme_io": true, 00:07:22.680 "nvme_io_md": false, 00:07:22.680 "write_zeroes": true, 00:07:22.680 "zcopy": false, 00:07:22.680 "get_zone_info": false, 00:07:22.680 "zone_management": false, 00:07:22.680 "zone_append": false, 00:07:22.680 "compare": true, 00:07:22.680 "compare_and_write": true, 00:07:22.680 "abort": true, 00:07:22.680 "seek_hole": false, 00:07:22.680 "seek_data": false, 00:07:22.680 "copy": true, 00:07:22.680 "nvme_iov_md": false 00:07:22.680 }, 00:07:22.680 "memory_domains": [ 00:07:22.680 { 00:07:22.680 "dma_device_id": "system", 00:07:22.680 "dma_device_type": 1 00:07:22.680 } 00:07:22.680 ], 00:07:22.680 "driver_specific": { 00:07:22.680 "nvme": [ 00:07:22.680 { 00:07:22.680 "trid": { 00:07:22.680 "trtype": "TCP", 00:07:22.680 "adrfam": "IPv4", 00:07:22.680 "traddr": "10.0.0.2", 00:07:22.680 "trsvcid": "4420", 00:07:22.680 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:22.680 }, 00:07:22.680 "ctrlr_data": { 00:07:22.680 "cntlid": 1, 00:07:22.680 "vendor_id": "0x8086", 00:07:22.680 "model_number": "SPDK bdev Controller", 00:07:22.680 "serial_number": "SPDK0", 00:07:22.680 "firmware_revision": "25.01", 00:07:22.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.680 "oacs": { 00:07:22.680 "security": 0, 00:07:22.680 "format": 0, 00:07:22.680 "firmware": 0, 00:07:22.680 "ns_manage": 0 00:07:22.680 }, 00:07:22.680 "multi_ctrlr": true, 00:07:22.680 "ana_reporting": false 00:07:22.680 }, 00:07:22.680 "vs": { 00:07:22.680 "nvme_version": "1.3" 00:07:22.680 }, 00:07:22.680 "ns_data": { 00:07:22.680 "id": 1, 00:07:22.680 "can_share": true 00:07:22.680 } 00:07:22.680 } 00:07:22.680 ], 00:07:22.680 "mp_policy": "active_passive" 00:07:22.680 } 00:07:22.680 } 00:07:22.680 ] 00:07:22.680 13:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1966357 00:07:22.680 13:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:22.680 13:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:22.941 Running I/O for 10 seconds... 00:07:23.998 Latency(us) 00:07:23.998 [2024-12-06T12:15:10.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.998 Nvme0n1 : 1.00 25109.00 98.08 0.00 0.00 0.00 0.00 0.00 00:07:23.998 [2024-12-06T12:15:10.657Z] =================================================================================================================== 00:07:23.998 [2024-12-06T12:15:10.657Z] Total : 25109.00 98.08 0.00 0.00 0.00 0.00 0.00 00:07:23.998 00:07:24.935 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:24.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.935 Nvme0n1 : 2.00 25336.00 98.97 0.00 0.00 0.00 0.00 0.00 00:07:24.935 [2024-12-06T12:15:11.594Z] =================================================================================================================== 00:07:24.935 [2024-12-06T12:15:11.594Z] Total : 25336.00 98.97 0.00 0.00 0.00 0.00 0.00 00:07:24.935 00:07:24.935 true 00:07:24.935 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:24.935 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:25.195 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:25.195 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:25.195 13:15:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1966357 00:07:25.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.765 Nvme0n1 : 3.00 25422.33 99.31 0.00 0.00 0.00 0.00 0.00 00:07:25.765 [2024-12-06T12:15:12.424Z] =================================================================================================================== 00:07:25.765 [2024-12-06T12:15:12.424Z] Total : 25422.33 99.31 0.00 0.00 0.00 0.00 0.00 00:07:25.765 00:07:27.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.146 Nvme0n1 : 4.00 25494.75 99.59 0.00 0.00 0.00 0.00 0.00 00:07:27.146 [2024-12-06T12:15:13.805Z] =================================================================================================================== 00:07:27.146 [2024-12-06T12:15:13.805Z] Total : 25494.75 99.59 0.00 0.00 0.00 0.00 0.00 00:07:27.146 00:07:28.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.085 Nvme0n1 : 5.00 25544.40 99.78 0.00 0.00 0.00 0.00 0.00 00:07:28.085 [2024-12-06T12:15:14.744Z] =================================================================================================================== 00:07:28.085 [2024-12-06T12:15:14.744Z] Total : 25544.40 99.78 0.00 0.00 0.00 0.00 0.00 00:07:28.085 00:07:29.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.031 Nvme0n1 : 6.00 25582.50 99.93 0.00 0.00 0.00 0.00 0.00 00:07:29.031 [2024-12-06T12:15:15.690Z] =================================================================================================================== 00:07:29.031 
[2024-12-06T12:15:15.690Z] Total : 25582.50 99.93 0.00 0.00 0.00 0.00 0.00 00:07:29.031 00:07:29.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.966 Nvme0n1 : 7.00 25603.29 100.01 0.00 0.00 0.00 0.00 0.00 00:07:29.966 [2024-12-06T12:15:16.625Z] =================================================================================================================== 00:07:29.966 [2024-12-06T12:15:16.625Z] Total : 25603.29 100.01 0.00 0.00 0.00 0.00 0.00 00:07:29.966 00:07:30.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.903 Nvme0n1 : 8.00 25626.38 100.10 0.00 0.00 0.00 0.00 0.00 00:07:30.903 [2024-12-06T12:15:17.562Z] =================================================================================================================== 00:07:30.903 [2024-12-06T12:15:17.562Z] Total : 25626.38 100.10 0.00 0.00 0.00 0.00 0.00 00:07:30.903 00:07:31.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.843 Nvme0n1 : 9.00 25644.56 100.17 0.00 0.00 0.00 0.00 0.00 00:07:31.843 [2024-12-06T12:15:18.502Z] =================================================================================================================== 00:07:31.843 [2024-12-06T12:15:18.502Z] Total : 25644.56 100.17 0.00 0.00 0.00 0.00 0.00 00:07:31.843 00:07:32.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.784 Nvme0n1 : 10.00 25659.30 100.23 0.00 0.00 0.00 0.00 0.00 00:07:32.784 [2024-12-06T12:15:19.443Z] =================================================================================================================== 00:07:32.784 [2024-12-06T12:15:19.443Z] Total : 25659.30 100.23 0.00 0.00 0.00 0.00 0.00 00:07:32.784 00:07:32.784 00:07:32.784 Latency(us) 00:07:32.784 [2024-12-06T12:15:19.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:32.784 Nvme0n1 : 10.00 25659.51 100.23 0.00 0.00 4984.62 2143.57 15182.51 00:07:32.784 [2024-12-06T12:15:19.443Z] =================================================================================================================== 00:07:32.784 [2024-12-06T12:15:19.443Z] Total : 25659.51 100.23 0.00 0.00 4984.62 2143.57 15182.51 00:07:32.784 { 00:07:32.784 "results": [ 00:07:32.784 { 00:07:32.784 "job": "Nvme0n1", 00:07:32.784 "core_mask": "0x2", 00:07:32.785 "workload": "randwrite", 00:07:32.785 "status": "finished", 00:07:32.785 "queue_depth": 128, 00:07:32.785 "io_size": 4096, 00:07:32.785 "runtime": 10.004906, 00:07:32.785 "iops": 25659.51144368573, 00:07:32.785 "mibps": 100.23246657689738, 00:07:32.785 "io_failed": 0, 00:07:32.785 "io_timeout": 0, 00:07:32.785 "avg_latency_us": 4984.622504690566, 00:07:32.785 "min_latency_us": 2143.5733333333333, 00:07:32.785 "max_latency_us": 15182.506666666666 00:07:32.785 } 00:07:32.785 ], 00:07:32.785 "core_count": 1 00:07:32.785 } 00:07:32.785 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1965887 00:07:32.785 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1965887 ']' 00:07:32.785 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1965887 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1965887 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:33.045 13:15:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1965887' 00:07:33.045 killing process with pid 1965887 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1965887 00:07:33.045 Received shutdown signal, test time was about 10.000000 seconds 00:07:33.045 00:07:33.045 Latency(us) 00:07:33.045 [2024-12-06T12:15:19.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.045 [2024-12-06T12:15:19.704Z] =================================================================================================================== 00:07:33.045 [2024-12-06T12:15:19.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1965887 00:07:33.045 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.305 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.565 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:33.565 13:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:33.565 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:33.566 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:33.566 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:33.826 [2024-12-06 13:15:20.325677] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.826 
13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:33.826 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:34.086 request: 00:07:34.086 { 00:07:34.086 "uuid": "bb261f8e-44fe-4064-8d2a-e94c4051eb5a", 00:07:34.086 "method": "bdev_lvol_get_lvstores", 00:07:34.086 "req_id": 1 00:07:34.086 } 00:07:34.086 Got JSON-RPC error response 00:07:34.086 response: 00:07:34.086 { 00:07:34.086 "code": -19, 00:07:34.086 "message": "No such device" 00:07:34.086 } 00:07:34.086 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:34.086 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.086 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.087 aio_bdev 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2eecb1b8-ef4c-426b-a606-08fbb458ba6c 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2eecb1b8-ef4c-426b-a606-08fbb458ba6c 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:34.087 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:34.347 13:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2eecb1b8-ef4c-426b-a606-08fbb458ba6c -t 2000 00:07:34.607 [ 00:07:34.607 { 00:07:34.607 "name": "2eecb1b8-ef4c-426b-a606-08fbb458ba6c", 00:07:34.607 "aliases": [ 00:07:34.607 "lvs/lvol" 00:07:34.607 ], 00:07:34.607 "product_name": "Logical Volume", 00:07:34.607 "block_size": 4096, 00:07:34.607 "num_blocks": 38912, 00:07:34.607 "uuid": "2eecb1b8-ef4c-426b-a606-08fbb458ba6c", 00:07:34.607 "assigned_rate_limits": { 00:07:34.607 "rw_ios_per_sec": 0, 00:07:34.607 "rw_mbytes_per_sec": 0, 00:07:34.607 "r_mbytes_per_sec": 0, 00:07:34.607 "w_mbytes_per_sec": 0 00:07:34.607 }, 00:07:34.607 "claimed": false, 00:07:34.607 "zoned": false, 00:07:34.607 "supported_io_types": { 00:07:34.607 "read": true, 00:07:34.607 "write": true, 00:07:34.607 "unmap": true, 00:07:34.607 "flush": false, 00:07:34.607 "reset": true, 00:07:34.607 
"nvme_admin": false, 00:07:34.607 "nvme_io": false, 00:07:34.607 "nvme_io_md": false, 00:07:34.607 "write_zeroes": true, 00:07:34.607 "zcopy": false, 00:07:34.607 "get_zone_info": false, 00:07:34.607 "zone_management": false, 00:07:34.607 "zone_append": false, 00:07:34.607 "compare": false, 00:07:34.607 "compare_and_write": false, 00:07:34.607 "abort": false, 00:07:34.607 "seek_hole": true, 00:07:34.607 "seek_data": true, 00:07:34.607 "copy": false, 00:07:34.607 "nvme_iov_md": false 00:07:34.607 }, 00:07:34.607 "driver_specific": { 00:07:34.607 "lvol": { 00:07:34.607 "lvol_store_uuid": "bb261f8e-44fe-4064-8d2a-e94c4051eb5a", 00:07:34.607 "base_bdev": "aio_bdev", 00:07:34.607 "thin_provision": false, 00:07:34.607 "num_allocated_clusters": 38, 00:07:34.607 "snapshot": false, 00:07:34.607 "clone": false, 00:07:34.607 "esnap_clone": false 00:07:34.607 } 00:07:34.607 } 00:07:34.607 } 00:07:34.607 ] 00:07:34.607 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:34.608 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:34.608 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:34.608 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:34.608 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:34.608 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:34.868 13:15:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:34.868 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2eecb1b8-ef4c-426b-a606-08fbb458ba6c 00:07:34.868 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb261f8e-44fe-4064-8d2a-e94c4051eb5a 00:07:35.128 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.390 00:07:35.390 real 0m15.946s 00:07:35.390 user 0m15.627s 00:07:35.390 sys 0m1.427s 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:35.390 ************************************ 00:07:35.390 END TEST lvs_grow_clean 00:07:35.390 ************************************ 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.390 ************************************ 
00:07:35.390 START TEST lvs_grow_dirty 00:07:35.390 ************************************ 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.390 13:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:35.652 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:35.652 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:35.912 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c298afe-945e-44f2-a53d-4b671c967503 00:07:35.913 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:35.913 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:35.913 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:35.913 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:35.913 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c298afe-945e-44f2-a53d-4b671c967503 lvol 150 00:07:36.174 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4136990-9211-4a4c-ae87-f3e1469c4552 00:07:36.174 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.174 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:36.436 [2024-12-06 13:15:22.867626] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:36.436 [2024-12-06 13:15:22.867668] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:36.436 true 00:07:36.436 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:36.436 13:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:36.436 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:36.436 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:36.697 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4136990-9211-4a4c-ae87-f3e1469c4552 00:07:36.989 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:36.989 [2024-12-06 13:15:23.545578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.989 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1969470 00:07:37.249 13:15:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1969470 /var/tmp/bdevperf.sock 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1969470 ']' 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.249 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.250 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.250 13:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.250 [2024-12-06 13:15:23.761409] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:07:37.250 [2024-12-06 13:15:23.761465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969470 ] 00:07:37.250 [2024-12-06 13:15:23.844701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.250 [2024-12-06 13:15:23.874788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.191 13:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.191 13:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:38.191 13:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:38.450 Nvme0n1 00:07:38.450 13:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:38.450 [ 00:07:38.450 { 00:07:38.450 "name": "Nvme0n1", 00:07:38.450 "aliases": [ 00:07:38.450 "a4136990-9211-4a4c-ae87-f3e1469c4552" 00:07:38.450 ], 00:07:38.450 "product_name": "NVMe disk", 00:07:38.450 "block_size": 4096, 00:07:38.451 "num_blocks": 38912, 00:07:38.451 "uuid": "a4136990-9211-4a4c-ae87-f3e1469c4552", 00:07:38.451 "numa_id": 0, 00:07:38.451 "assigned_rate_limits": { 00:07:38.451 "rw_ios_per_sec": 0, 00:07:38.451 "rw_mbytes_per_sec": 0, 00:07:38.451 "r_mbytes_per_sec": 0, 00:07:38.451 "w_mbytes_per_sec": 0 00:07:38.451 }, 00:07:38.451 "claimed": false, 00:07:38.451 "zoned": false, 00:07:38.451 "supported_io_types": { 00:07:38.451 "read": true, 
00:07:38.451 "write": true, 00:07:38.451 "unmap": true, 00:07:38.451 "flush": true, 00:07:38.451 "reset": true, 00:07:38.451 "nvme_admin": true, 00:07:38.451 "nvme_io": true, 00:07:38.451 "nvme_io_md": false, 00:07:38.451 "write_zeroes": true, 00:07:38.451 "zcopy": false, 00:07:38.451 "get_zone_info": false, 00:07:38.451 "zone_management": false, 00:07:38.451 "zone_append": false, 00:07:38.451 "compare": true, 00:07:38.451 "compare_and_write": true, 00:07:38.451 "abort": true, 00:07:38.451 "seek_hole": false, 00:07:38.451 "seek_data": false, 00:07:38.451 "copy": true, 00:07:38.451 "nvme_iov_md": false 00:07:38.451 }, 00:07:38.451 "memory_domains": [ 00:07:38.451 { 00:07:38.451 "dma_device_id": "system", 00:07:38.451 "dma_device_type": 1 00:07:38.451 } 00:07:38.451 ], 00:07:38.451 "driver_specific": { 00:07:38.451 "nvme": [ 00:07:38.451 { 00:07:38.451 "trid": { 00:07:38.451 "trtype": "TCP", 00:07:38.451 "adrfam": "IPv4", 00:07:38.451 "traddr": "10.0.0.2", 00:07:38.451 "trsvcid": "4420", 00:07:38.451 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:38.451 }, 00:07:38.451 "ctrlr_data": { 00:07:38.451 "cntlid": 1, 00:07:38.451 "vendor_id": "0x8086", 00:07:38.451 "model_number": "SPDK bdev Controller", 00:07:38.451 "serial_number": "SPDK0", 00:07:38.451 "firmware_revision": "25.01", 00:07:38.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.451 "oacs": { 00:07:38.451 "security": 0, 00:07:38.451 "format": 0, 00:07:38.451 "firmware": 0, 00:07:38.451 "ns_manage": 0 00:07:38.451 }, 00:07:38.451 "multi_ctrlr": true, 00:07:38.451 "ana_reporting": false 00:07:38.451 }, 00:07:38.451 "vs": { 00:07:38.451 "nvme_version": "1.3" 00:07:38.451 }, 00:07:38.451 "ns_data": { 00:07:38.451 "id": 1, 00:07:38.451 "can_share": true 00:07:38.451 } 00:07:38.451 } 00:07:38.451 ], 00:07:38.451 "mp_policy": "active_passive" 00:07:38.451 } 00:07:38.451 } 00:07:38.451 ] 00:07:38.710 13:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.710 13:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1969806 00:07:38.710 13:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:38.710 Running I/O for 10 seconds... 00:07:39.651 Latency(us) 00:07:39.651 [2024-12-06T12:15:26.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.651 Nvme0n1 : 1.00 25321.00 98.91 0.00 0.00 0.00 0.00 0.00 00:07:39.651 [2024-12-06T12:15:26.310Z] =================================================================================================================== 00:07:39.651 [2024-12-06T12:15:26.310Z] Total : 25321.00 98.91 0.00 0.00 0.00 0.00 0.00 00:07:39.651 00:07:40.594 13:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:40.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.594 Nvme0n1 : 2.00 25492.00 99.58 0.00 0.00 0.00 0.00 0.00 00:07:40.594 [2024-12-06T12:15:27.253Z] =================================================================================================================== 00:07:40.594 [2024-12-06T12:15:27.253Z] Total : 25492.00 99.58 0.00 0.00 0.00 0.00 0.00 00:07:40.594 00:07:40.854 true 00:07:40.854 13:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:40.855 13:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:40.855 13:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.855 13:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.855 13:15:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1969806 00:07:41.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.797 Nvme0n1 : 3.00 25549.00 99.80 0.00 0.00 0.00 0.00 0.00 00:07:41.797 [2024-12-06T12:15:28.456Z] =================================================================================================================== 00:07:41.797 [2024-12-06T12:15:28.456Z] Total : 25549.00 99.80 0.00 0.00 0.00 0.00 0.00 00:07:41.797 00:07:42.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.740 Nvme0n1 : 4.00 25593.50 99.97 0.00 0.00 0.00 0.00 0.00 00:07:42.740 [2024-12-06T12:15:29.399Z] =================================================================================================================== 00:07:42.740 [2024-12-06T12:15:29.399Z] Total : 25593.50 99.97 0.00 0.00 0.00 0.00 0.00 00:07:42.740 00:07:43.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.683 Nvme0n1 : 5.00 25620.60 100.08 0.00 0.00 0.00 0.00 0.00 00:07:43.683 [2024-12-06T12:15:30.342Z] =================================================================================================================== 00:07:43.683 [2024-12-06T12:15:30.342Z] Total : 25620.60 100.08 0.00 0.00 0.00 0.00 0.00 00:07:43.683 00:07:44.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.624 Nvme0n1 : 6.00 25638.50 100.15 0.00 0.00 0.00 0.00 0.00 00:07:44.624 [2024-12-06T12:15:31.283Z] =================================================================================================================== 00:07:44.624 
[2024-12-06T12:15:31.283Z] Total : 25638.50 100.15 0.00 0.00 0.00 0.00 0.00 00:07:44.624 00:07:45.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.566 Nvme0n1 : 7.00 25669.43 100.27 0.00 0.00 0.00 0.00 0.00 00:07:45.566 [2024-12-06T12:15:32.225Z] =================================================================================================================== 00:07:45.566 [2024-12-06T12:15:32.225Z] Total : 25669.43 100.27 0.00 0.00 0.00 0.00 0.00 00:07:45.566 00:07:46.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.951 Nvme0n1 : 8.00 25684.50 100.33 0.00 0.00 0.00 0.00 0.00 00:07:46.951 [2024-12-06T12:15:33.610Z] =================================================================================================================== 00:07:46.951 [2024-12-06T12:15:33.610Z] Total : 25684.50 100.33 0.00 0.00 0.00 0.00 0.00 00:07:46.951 00:07:47.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.894 Nvme0n1 : 9.00 25696.56 100.38 0.00 0.00 0.00 0.00 0.00 00:07:47.894 [2024-12-06T12:15:34.553Z] =================================================================================================================== 00:07:47.894 [2024-12-06T12:15:34.553Z] Total : 25696.56 100.38 0.00 0.00 0.00 0.00 0.00 00:07:47.894 00:07:48.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.836 Nvme0n1 : 10.00 25712.40 100.44 0.00 0.00 0.00 0.00 0.00 00:07:48.836 [2024-12-06T12:15:35.495Z] =================================================================================================================== 00:07:48.836 [2024-12-06T12:15:35.495Z] Total : 25712.40 100.44 0.00 0.00 0.00 0.00 0.00 00:07:48.836 00:07:48.836 00:07:48.836 Latency(us) 00:07:48.836 [2024-12-06T12:15:35.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:48.836 Nvme0n1 : 10.00 25710.54 100.43 0.00 0.00 4975.12 1549.65 8192.00 00:07:48.836 [2024-12-06T12:15:35.495Z] =================================================================================================================== 00:07:48.836 [2024-12-06T12:15:35.495Z] Total : 25710.54 100.43 0.00 0.00 4975.12 1549.65 8192.00 00:07:48.836 { 00:07:48.836 "results": [ 00:07:48.836 { 00:07:48.836 "job": "Nvme0n1", 00:07:48.836 "core_mask": "0x2", 00:07:48.836 "workload": "randwrite", 00:07:48.836 "status": "finished", 00:07:48.836 "queue_depth": 128, 00:07:48.836 "io_size": 4096, 00:07:48.836 "runtime": 10.003173, 00:07:48.836 "iops": 25710.54204500912, 00:07:48.836 "mibps": 100.43180486331687, 00:07:48.836 "io_failed": 0, 00:07:48.836 "io_timeout": 0, 00:07:48.836 "avg_latency_us": 4975.1209766175325, 00:07:48.836 "min_latency_us": 1549.6533333333334, 00:07:48.836 "max_latency_us": 8192.0 00:07:48.836 } 00:07:48.836 ], 00:07:48.836 "core_count": 1 00:07:48.836 } 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1969470 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1969470 ']' 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1969470 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969470 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969470' 00:07:48.836 killing process with pid 1969470 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1969470 00:07:48.836 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.836 00:07:48.836 Latency(us) 00:07:48.836 [2024-12-06T12:15:35.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.836 [2024-12-06T12:15:35.495Z] =================================================================================================================== 00:07:48.836 [2024-12-06T12:15:35.495Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1969470 00:07:48.836 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.098 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.360 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:49.360 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:49.360 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:49.360 13:15:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:49.360 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1965184 00:07:49.360 13:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1965184 00:07:49.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1965184 Killed "${NVMF_APP[@]}" "$@" 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1972043 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1972043 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1972043 ']' 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.621 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.621 [2024-12-06 13:15:36.103269] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:07:49.621 [2024-12-06 13:15:36.103323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.621 [2024-12-06 13:15:36.196774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.621 [2024-12-06 13:15:36.226707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.621 [2024-12-06 13:15:36.226734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.621 [2024-12-06 13:15:36.226740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.621 [2024-12-06 13:15:36.226744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.621 [2024-12-06 13:15:36.226748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:49.621 [2024-12-06 13:15:36.227168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.562 13:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.562 [2024-12-06 13:15:37.089579] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:50.562 [2024-12-06 13:15:37.089652] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:50.562 [2024-12-06 13:15:37.089675] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a4136990-9211-4a4c-ae87-f3e1469c4552 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4136990-9211-4a4c-ae87-f3e1469c4552 
00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.562 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.821 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4136990-9211-4a4c-ae87-f3e1469c4552 -t 2000 00:07:50.821 [ 00:07:50.821 { 00:07:50.821 "name": "a4136990-9211-4a4c-ae87-f3e1469c4552", 00:07:50.821 "aliases": [ 00:07:50.821 "lvs/lvol" 00:07:50.821 ], 00:07:50.821 "product_name": "Logical Volume", 00:07:50.821 "block_size": 4096, 00:07:50.821 "num_blocks": 38912, 00:07:50.821 "uuid": "a4136990-9211-4a4c-ae87-f3e1469c4552", 00:07:50.821 "assigned_rate_limits": { 00:07:50.821 "rw_ios_per_sec": 0, 00:07:50.821 "rw_mbytes_per_sec": 0, 00:07:50.821 "r_mbytes_per_sec": 0, 00:07:50.821 "w_mbytes_per_sec": 0 00:07:50.821 }, 00:07:50.821 "claimed": false, 00:07:50.821 "zoned": false, 00:07:50.821 "supported_io_types": { 00:07:50.821 "read": true, 00:07:50.821 "write": true, 00:07:50.821 "unmap": true, 00:07:50.821 "flush": false, 00:07:50.821 "reset": true, 00:07:50.821 "nvme_admin": false, 00:07:50.821 "nvme_io": false, 00:07:50.821 "nvme_io_md": false, 00:07:50.821 "write_zeroes": true, 00:07:50.821 "zcopy": false, 00:07:50.821 "get_zone_info": false, 00:07:50.821 "zone_management": false, 00:07:50.821 "zone_append": 
false, 00:07:50.821 "compare": false, 00:07:50.821 "compare_and_write": false, 00:07:50.821 "abort": false, 00:07:50.821 "seek_hole": true, 00:07:50.821 "seek_data": true, 00:07:50.821 "copy": false, 00:07:50.821 "nvme_iov_md": false 00:07:50.821 }, 00:07:50.821 "driver_specific": { 00:07:50.821 "lvol": { 00:07:50.821 "lvol_store_uuid": "5c298afe-945e-44f2-a53d-4b671c967503", 00:07:50.821 "base_bdev": "aio_bdev", 00:07:50.821 "thin_provision": false, 00:07:50.821 "num_allocated_clusters": 38, 00:07:50.821 "snapshot": false, 00:07:50.821 "clone": false, 00:07:50.821 "esnap_clone": false 00:07:50.821 } 00:07:50.821 } 00:07:50.821 } 00:07:50.821 ] 00:07:50.821 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:50.821 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:50.821 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:51.080 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:51.081 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:51.081 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:51.341 [2024-12-06 13:15:37.938166] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.341 13:15:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:51.341 13:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:51.601 request: 00:07:51.601 { 00:07:51.601 "uuid": "5c298afe-945e-44f2-a53d-4b671c967503", 00:07:51.601 "method": "bdev_lvol_get_lvstores", 00:07:51.601 "req_id": 1 00:07:51.601 } 00:07:51.601 Got JSON-RPC error response 00:07:51.601 response: 00:07:51.601 { 00:07:51.601 "code": -19, 00:07:51.601 "message": "No such device" 00:07:51.601 } 00:07:51.601 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:51.601 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.601 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.601 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.601 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.861 aio_bdev 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4136990-9211-4a4c-ae87-f3e1469c4552 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4136990-9211-4a4c-ae87-f3e1469c4552 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:51.861 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4136990-9211-4a4c-ae87-f3e1469c4552 -t 2000 00:07:52.121 [ 00:07:52.121 { 00:07:52.121 "name": "a4136990-9211-4a4c-ae87-f3e1469c4552", 00:07:52.121 "aliases": [ 00:07:52.121 "lvs/lvol" 00:07:52.121 ], 00:07:52.121 "product_name": "Logical Volume", 00:07:52.121 "block_size": 4096, 00:07:52.121 "num_blocks": 38912, 00:07:52.121 "uuid": "a4136990-9211-4a4c-ae87-f3e1469c4552", 00:07:52.121 "assigned_rate_limits": { 00:07:52.121 "rw_ios_per_sec": 0, 00:07:52.121 "rw_mbytes_per_sec": 0, 00:07:52.121 "r_mbytes_per_sec": 0, 00:07:52.121 "w_mbytes_per_sec": 0 00:07:52.121 }, 00:07:52.121 "claimed": false, 00:07:52.121 "zoned": false, 00:07:52.121 "supported_io_types": { 00:07:52.121 "read": true, 00:07:52.121 "write": true, 00:07:52.121 "unmap": true, 00:07:52.121 "flush": false, 00:07:52.121 "reset": true, 00:07:52.121 "nvme_admin": false, 00:07:52.121 "nvme_io": false, 00:07:52.121 "nvme_io_md": false, 00:07:52.121 "write_zeroes": true, 00:07:52.121 "zcopy": false, 00:07:52.121 "get_zone_info": false, 00:07:52.121 "zone_management": false, 00:07:52.121 "zone_append": false, 00:07:52.121 "compare": false, 00:07:52.121 "compare_and_write": false, 
00:07:52.121 "abort": false, 00:07:52.121 "seek_hole": true, 00:07:52.121 "seek_data": true, 00:07:52.121 "copy": false, 00:07:52.121 "nvme_iov_md": false 00:07:52.121 }, 00:07:52.121 "driver_specific": { 00:07:52.121 "lvol": { 00:07:52.121 "lvol_store_uuid": "5c298afe-945e-44f2-a53d-4b671c967503", 00:07:52.121 "base_bdev": "aio_bdev", 00:07:52.121 "thin_provision": false, 00:07:52.121 "num_allocated_clusters": 38, 00:07:52.121 "snapshot": false, 00:07:52.121 "clone": false, 00:07:52.121 "esnap_clone": false 00:07:52.121 } 00:07:52.121 } 00:07:52.121 } 00:07:52.121 ] 00:07:52.121 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:52.121 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:52.121 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:52.437 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:52.437 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:52.437 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:52.437 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:52.437 13:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4136990-9211-4a4c-ae87-f3e1469c4552 00:07:52.696 13:15:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c298afe-945e-44f2-a53d-4b671c967503 00:07:52.696 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.956 00:07:52.956 real 0m17.546s 00:07:52.956 user 0m46.003s 00:07:52.956 sys 0m2.940s 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.956 ************************************ 00:07:52.956 END TEST lvs_grow_dirty 00:07:52.956 ************************************ 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:52.956 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:52.956 nvmf_trace.0 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.215 rmmod nvme_tcp 00:07:53.215 rmmod nvme_fabrics 00:07:53.215 rmmod nvme_keyring 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1972043 ']' 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1972043 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1972043 ']' 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1972043 
00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1972043 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1972043' 00:07:53.215 killing process with pid 1972043 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1972043 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1972043 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.215 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.473 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.473 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:53.473 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.473 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.473 13:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.381 00:07:55.381 real 0m44.891s 00:07:55.381 user 1m8.034s 00:07:55.381 sys 0m10.440s 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 ************************************ 00:07:55.381 END TEST nvmf_lvs_grow 00:07:55.381 ************************************ 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.381 13:15:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.381 ************************************ 00:07:55.381 START TEST nvmf_bdev_io_wait 00:07:55.381 ************************************ 00:07:55.381 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:55.642 * Looking for test storage... 
00:07:55.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:55.642 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.642 --rc genhtml_branch_coverage=1 00:07:55.642 --rc genhtml_function_coverage=1 00:07:55.642 --rc genhtml_legend=1 00:07:55.642 --rc geninfo_all_blocks=1 00:07:55.642 --rc geninfo_unexecuted_blocks=1 00:07:55.642 00:07:55.642 ' 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:55.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.642 --rc genhtml_branch_coverage=1 00:07:55.642 --rc genhtml_function_coverage=1 00:07:55.642 --rc genhtml_legend=1 00:07:55.642 --rc geninfo_all_blocks=1 00:07:55.642 --rc geninfo_unexecuted_blocks=1 00:07:55.642 00:07:55.642 ' 00:07:55.642 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:55.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.643 --rc genhtml_branch_coverage=1 00:07:55.643 --rc genhtml_function_coverage=1 00:07:55.643 --rc genhtml_legend=1 00:07:55.643 --rc geninfo_all_blocks=1 00:07:55.643 --rc geninfo_unexecuted_blocks=1 00:07:55.643 00:07:55.643 ' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:55.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.643 --rc genhtml_branch_coverage=1 00:07:55.643 --rc genhtml_function_coverage=1 00:07:55.643 --rc genhtml_legend=1 00:07:55.643 --rc geninfo_all_blocks=1 00:07:55.643 --rc geninfo_unexecuted_blocks=1 00:07:55.643 00:07:55.643 ' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.643 13:15:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:55.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:55.643 13:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.911 13:15:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.911 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:03.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:03.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.912 13:15:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:03.912 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.912 
13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:03.912 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.912 13:15:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:08:03.912 00:08:03.912 --- 10.0.0.2 ping statistics --- 00:08:03.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.912 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:08:03.912 00:08:03.912 --- 10.0.0.1 ping statistics --- 00:08:03.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.912 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1977060 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1977060 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1977060 ']' 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.912 13:15:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:03.912 [2024-12-06 13:15:49.857587] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:03.912 [2024-12-06 13:15:49.857652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.912 [2024-12-06 13:15:49.958125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.912 [2024-12-06 13:15:50.014545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.912 [2024-12-06 13:15:50.014601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:03.912 [2024-12-06 13:15:50.014610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.912 [2024-12-06 13:15:50.014618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.913 [2024-12-06 13:15:50.014624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.913 [2024-12-06 13:15:50.016904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.913 [2024-12-06 13:15:50.017065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.913 [2024-12-06 13:15:50.017231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.913 [2024-12-06 13:15:50.017231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.174 13:15:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.174 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.174 [2024-12-06 13:15:50.808640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.175 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.175 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:04.175 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.175 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 Malloc0 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.437 
13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 [2024-12-06 13:15:50.874136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1977271 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1977273 
00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.437 { 00:08:04.437 "params": { 00:08:04.437 "name": "Nvme$subsystem", 00:08:04.437 "trtype": "$TEST_TRANSPORT", 00:08:04.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.437 "adrfam": "ipv4", 00:08:04.437 "trsvcid": "$NVMF_PORT", 00:08:04.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.437 "hdgst": ${hdgst:-false}, 00:08:04.437 "ddgst": ${ddgst:-false} 00:08:04.437 }, 00:08:04.437 "method": "bdev_nvme_attach_controller" 00:08:04.437 } 00:08:04.437 EOF 00:08:04.437 )") 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1977275 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.437 { 00:08:04.437 "params": { 00:08:04.437 
"name": "Nvme$subsystem", 00:08:04.437 "trtype": "$TEST_TRANSPORT", 00:08:04.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.437 "adrfam": "ipv4", 00:08:04.437 "trsvcid": "$NVMF_PORT", 00:08:04.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.437 "hdgst": ${hdgst:-false}, 00:08:04.437 "ddgst": ${ddgst:-false} 00:08:04.437 }, 00:08:04.437 "method": "bdev_nvme_attach_controller" 00:08:04.437 } 00:08:04.437 EOF 00:08:04.437 )") 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1977278 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.437 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.437 { 00:08:04.437 "params": { 00:08:04.437 "name": "Nvme$subsystem", 00:08:04.437 "trtype": "$TEST_TRANSPORT", 00:08:04.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.437 "adrfam": "ipv4", 00:08:04.437 "trsvcid": "$NVMF_PORT", 00:08:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.438 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:04.438 "hdgst": ${hdgst:-false}, 00:08:04.438 "ddgst": ${ddgst:-false} 00:08:04.438 }, 00:08:04.438 "method": "bdev_nvme_attach_controller" 00:08:04.438 } 00:08:04.438 EOF 00:08:04.438 )") 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.438 { 00:08:04.438 "params": { 00:08:04.438 "name": "Nvme$subsystem", 00:08:04.438 "trtype": "$TEST_TRANSPORT", 00:08:04.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.438 "adrfam": "ipv4", 00:08:04.438 "trsvcid": "$NVMF_PORT", 00:08:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.438 "hdgst": ${hdgst:-false}, 00:08:04.438 "ddgst": ${ddgst:-false} 00:08:04.438 }, 00:08:04.438 "method": "bdev_nvme_attach_controller" 00:08:04.438 } 00:08:04.438 EOF 00:08:04.438 )") 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1977271 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@584 -- # jq . 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.438 "params": { 00:08:04.438 "name": "Nvme1", 00:08:04.438 "trtype": "tcp", 00:08:04.438 "traddr": "10.0.0.2", 00:08:04.438 "adrfam": "ipv4", 00:08:04.438 "trsvcid": "4420", 00:08:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.438 "hdgst": false, 00:08:04.438 "ddgst": false 00:08:04.438 }, 00:08:04.438 "method": "bdev_nvme_attach_controller" 00:08:04.438 }' 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.438 "params": { 00:08:04.438 "name": "Nvme1", 00:08:04.438 "trtype": "tcp", 00:08:04.438 "traddr": "10.0.0.2", 00:08:04.438 "adrfam": "ipv4", 00:08:04.438 "trsvcid": "4420", 00:08:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.438 "hdgst": false, 00:08:04.438 "ddgst": false 00:08:04.438 }, 00:08:04.438 "method": "bdev_nvme_attach_controller" 00:08:04.438 }' 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.438 "params": { 00:08:04.438 "name": "Nvme1", 00:08:04.438 "trtype": "tcp", 00:08:04.438 "traddr": "10.0.0.2", 00:08:04.438 "adrfam": "ipv4", 00:08:04.438 "trsvcid": "4420", 00:08:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.438 "hdgst": false, 00:08:04.438 "ddgst": false 00:08:04.438 }, 00:08:04.438 "method": "bdev_nvme_attach_controller" 00:08:04.438 }' 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:04.438 13:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.438 "params": { 00:08:04.438 "name": "Nvme1", 00:08:04.438 "trtype": "tcp", 00:08:04.438 "traddr": "10.0.0.2", 00:08:04.438 "adrfam": "ipv4", 00:08:04.438 "trsvcid": "4420", 00:08:04.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.438 "hdgst": false, 00:08:04.438 "ddgst": false 00:08:04.438 }, 00:08:04.438 "method": "bdev_nvme_attach_controller" 00:08:04.438 }' 00:08:04.438 [2024-12-06 13:15:50.932931] Starting SPDK v25.01-pre git sha1 
b82e5bf03 / DPDK 24.03.0 initialization... 00:08:04.438 [2024-12-06 13:15:50.933005] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:04.438 [2024-12-06 13:15:50.934317] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:04.438 [2024-12-06 13:15:50.934378] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:04.438 [2024-12-06 13:15:50.937069] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:04.438 [2024-12-06 13:15:50.937139] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:04.438 [2024-12-06 13:15:50.948067] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:08:04.438 [2024-12-06 13:15:50.948132] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:04.699 [2024-12-06 13:15:51.149594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.699 [2024-12-06 13:15:51.189730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:04.699 [2024-12-06 13:15:51.241549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.699 [2024-12-06 13:15:51.280552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:04.699 [2024-12-06 13:15:51.335927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.961 [2024-12-06 13:15:51.379308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:04.961 [2024-12-06 13:15:51.406874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.961 [2024-12-06 13:15:51.444714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:04.961 Running I/O for 1 seconds... 00:08:05.222 Running I/O for 1 seconds... 00:08:05.222 Running I/O for 1 seconds... 00:08:05.222 Running I/O for 1 seconds... 
00:08:06.165 7790.00 IOPS, 30.43 MiB/s 00:08:06.165 Latency(us) 00:08:06.165 [2024-12-06T12:15:52.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.165 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:06.165 Nvme1n1 : 1.02 7801.84 30.48 0.00 0.00 16262.85 6744.75 24248.32 00:08:06.165 [2024-12-06T12:15:52.824Z] =================================================================================================================== 00:08:06.165 [2024-12-06T12:15:52.824Z] Total : 7801.84 30.48 0.00 0.00 16262.85 6744.75 24248.32 00:08:06.165 10333.00 IOPS, 40.36 MiB/s [2024-12-06T12:15:52.824Z] 7105.00 IOPS, 27.75 MiB/s 00:08:06.165 Latency(us) 00:08:06.165 [2024-12-06T12:15:52.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.165 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:06.165 Nvme1n1 : 1.01 10390.79 40.59 0.00 0.00 12268.63 6253.23 25559.04 00:08:06.165 [2024-12-06T12:15:52.824Z] =================================================================================================================== 00:08:06.165 [2024-12-06T12:15:52.824Z] Total : 10390.79 40.59 0.00 0.00 12268.63 6253.23 25559.04 00:08:06.165 00:08:06.165 Latency(us) 00:08:06.165 [2024-12-06T12:15:52.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.165 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:06.165 Nvme1n1 : 1.01 7194.55 28.10 0.00 0.00 17728.96 5188.27 38229.33 00:08:06.165 [2024-12-06T12:15:52.824Z] =================================================================================================================== 00:08:06.165 [2024-12-06T12:15:52.824Z] Total : 7194.55 28.10 0.00 0.00 17728.96 5188.27 38229.33 00:08:06.165 181408.00 IOPS, 708.62 MiB/s 00:08:06.165 Latency(us) 00:08:06.165 [2024-12-06T12:15:52.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:08:06.165 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:06.165 Nvme1n1 : 1.00 181049.38 707.22 0.00 0.00 702.74 296.96 1966.08 00:08:06.165 [2024-12-06T12:15:52.824Z] =================================================================================================================== 00:08:06.165 [2024-12-06T12:15:52.824Z] Total : 181049.38 707.22 0.00 0.00 702.74 296.96 1966.08 00:08:06.165 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1977273 00:08:06.165 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1977275 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1977278 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.426 rmmod nvme_tcp 00:08:06.426 rmmod nvme_fabrics 00:08:06.426 rmmod nvme_keyring 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1977060 ']' 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1977060 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1977060 ']' 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1977060 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977060 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977060' 00:08:06.426 killing process with pid 1977060 00:08:06.426 13:15:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1977060 00:08:06.426 13:15:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1977060 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.687 13:15:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.613 00:08:08.613 real 0m13.180s 00:08:08.613 user 0m20.048s 00:08:08.613 sys 0m7.472s 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:08.613 ************************************ 
00:08:08.613 END TEST nvmf_bdev_io_wait 00:08:08.613 ************************************ 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.613 13:15:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.881 ************************************ 00:08:08.881 START TEST nvmf_queue_depth 00:08:08.881 ************************************ 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:08.881 * Looking for test storage... 00:08:08.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.881 --rc genhtml_branch_coverage=1 00:08:08.881 --rc genhtml_function_coverage=1 00:08:08.881 --rc genhtml_legend=1 00:08:08.881 --rc geninfo_all_blocks=1 00:08:08.881 --rc 
geninfo_unexecuted_blocks=1 00:08:08.881 00:08:08.881 ' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.881 --rc genhtml_branch_coverage=1 00:08:08.881 --rc genhtml_function_coverage=1 00:08:08.881 --rc genhtml_legend=1 00:08:08.881 --rc geninfo_all_blocks=1 00:08:08.881 --rc geninfo_unexecuted_blocks=1 00:08:08.881 00:08:08.881 ' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.881 --rc genhtml_branch_coverage=1 00:08:08.881 --rc genhtml_function_coverage=1 00:08:08.881 --rc genhtml_legend=1 00:08:08.881 --rc geninfo_all_blocks=1 00:08:08.881 --rc geninfo_unexecuted_blocks=1 00:08:08.881 00:08:08.881 ' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.881 --rc genhtml_branch_coverage=1 00:08:08.881 --rc genhtml_function_coverage=1 00:08:08.881 --rc genhtml_legend=1 00:08:08.881 --rc geninfo_all_blocks=1 00:08:08.881 --rc geninfo_unexecuted_blocks=1 00:08:08.881 00:08:08.881 ' 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.881 13:15:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.881 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.882 13:15:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.882 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.882 13:15:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.142 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.142 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.142 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.142 13:15:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.276 13:16:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:17.276 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:17.276 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:17.276 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:17.276 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.276 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.277 
13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:17.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:08:17.277 00:08:17.277 --- 10.0.0.2 ping statistics --- 00:08:17.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.277 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:08:17.277 00:08:17.277 --- 10.0.0.1 ping statistics --- 00:08:17.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.277 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:17.277 13:16:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1981975 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1981975 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1981975 ']' 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.277 [2024-12-06 13:16:03.091646] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:17.277 [2024-12-06 13:16:03.091710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.277 [2024-12-06 13:16:03.196584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.277 [2024-12-06 13:16:03.247592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.277 [2024-12-06 13:16:03.247642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:17.277 [2024-12-06 13:16:03.247650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.277 [2024-12-06 13:16:03.247657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.277 [2024-12-06 13:16:03.247663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.277 [2024-12-06 13:16:03.248413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.277 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.538 [2024-12-06 13:16:03.972630] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
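The network bring-up traced above (`nvmf_tcp_init` in nvmf/common.sh) follows a fixed pattern: flush both interfaces, move the target-side interface into a fresh namespace, assign 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target), bring the links up, open TCP port 4420, and verify with ping. A dry-run sketch of that sequence (`run` only echoes each command, since the real steps need root and the `cvl_0_*` hardware):

```shell
#!/bin/sh
# Dry-run sketch of the netns-based TCP loopback setup traced in the log.
# Interface names and addresses are copied from the log above.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target-side network namespace
TGT_IF=cvl_0_0       # target interface, moved into the namespace
INI_IF=cvl_0_1       # initiator interface, stays in the root namespace

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Redefining `run() { "$@"; }` and executing as root would replay the harness's setup on a machine with matching interfaces.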
00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.538 13:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.538 Malloc0 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.538 [2024-12-06 13:16:04.033942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.538 13:16:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.538 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1982188 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1982188 /var/tmp/bdevperf.sock 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1982188 ']' 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.539 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.539 [2024-12-06 13:16:04.093515] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:08:17.539 [2024-12-06 13:16:04.093583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1982188 ] 00:08:17.539 [2024-12-06 13:16:04.184045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.800 [2024-12-06 13:16:04.238355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:18.372 NVMe0n1 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.372 13:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.632 Running I/O for 10 seconds... 
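The target provisioning traced above reduces to five RPCs against the `nvmf_tgt` application: create the TCP transport, back it with a malloc bdev, create a subsystem, attach the namespace, and add a listener. A dry-run sketch (`rpc` only echoes; the real harness issues these via `rpc_cmd`, which wraps SPDK's rpc.py against /var/tmp/spdk.sock — flags below are taken from the log):

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence from queue_depth.sh, lines 23-27 in the
# trace above. Swap the body of rpc() for the real rpc.py call to execute.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192 B IO unit
rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM disk, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener address 10.0.0.2:4420 matches the namespaced target interface and the iptables ACCEPT rule installed during network setup.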
00:08:20.511 8192.00 IOPS, 32.00 MiB/s [2024-12-06T12:16:08.111Z] 8707.50 IOPS, 34.01 MiB/s [2024-12-06T12:16:09.493Z] 9617.00 IOPS, 37.57 MiB/s [2024-12-06T12:16:10.433Z] 10494.50 IOPS, 40.99 MiB/s [2024-12-06T12:16:11.375Z] 11058.00 IOPS, 43.20 MiB/s [2024-12-06T12:16:12.314Z] 11433.17 IOPS, 44.66 MiB/s [2024-12-06T12:16:13.256Z] 11704.57 IOPS, 45.72 MiB/s [2024-12-06T12:16:14.196Z] 11930.12 IOPS, 46.60 MiB/s [2024-12-06T12:16:15.136Z] 12167.00 IOPS, 47.53 MiB/s [2024-12-06T12:16:15.395Z] 12304.00 IOPS, 48.06 MiB/s 00:08:28.736 Latency(us) 00:08:28.736 [2024-12-06T12:16:15.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.736 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:28.736 Verification LBA range: start 0x0 length 0x4000 00:08:28.736 NVMe0n1 : 10.05 12350.53 48.24 0.00 0.00 82621.14 8246.61 77769.39 00:08:28.736 [2024-12-06T12:16:15.395Z] =================================================================================================================== 00:08:28.736 [2024-12-06T12:16:15.395Z] Total : 12350.53 48.24 0.00 0.00 82621.14 8246.61 77769.39 00:08:28.736 { 00:08:28.736 "results": [ 00:08:28.736 { 00:08:28.736 "job": "NVMe0n1", 00:08:28.736 "core_mask": "0x1", 00:08:28.736 "workload": "verify", 00:08:28.736 "status": "finished", 00:08:28.736 "verify_range": { 00:08:28.736 "start": 0, 00:08:28.736 "length": 16384 00:08:28.736 }, 00:08:28.736 "queue_depth": 1024, 00:08:28.736 "io_size": 4096, 00:08:28.736 "runtime": 10.045235, 00:08:28.736 "iops": 12350.532366838606, 00:08:28.736 "mibps": 48.244267057963306, 00:08:28.736 "io_failed": 0, 00:08:28.736 "io_timeout": 0, 00:08:28.736 "avg_latency_us": 82621.14022182098, 00:08:28.736 "min_latency_us": 8246.613333333333, 00:08:28.736 "max_latency_us": 77769.38666666667 00:08:28.736 } 00:08:28.736 ], 00:08:28.736 "core_count": 1 00:08:28.736 } 00:08:28.736 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1982188 00:08:28.736 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1982188 ']' 00:08:28.736 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1982188 00:08:28.736 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1982188 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1982188' 00:08:28.737 killing process with pid 1982188 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1982188 00:08:28.737 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.737 00:08:28.737 Latency(us) 00:08:28.737 [2024-12-06T12:16:15.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.737 [2024-12-06T12:16:15.396Z] =================================================================================================================== 00:08:28.737 [2024-12-06T12:16:15.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1982188 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.737 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.737 rmmod nvme_tcp 00:08:28.737 rmmod nvme_fabrics 00:08:28.737 rmmod nvme_keyring 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1981975 ']' 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1981975 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1981975 ']' 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1981975 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981975 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981975' 00:08:28.997 killing process with pid 1981975 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1981975 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1981975 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.997 13:16:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.539 13:16:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.539 00:08:31.539 real 0m22.385s 00:08:31.539 user 0m25.459s 00:08:31.539 sys 0m7.173s 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.539 ************************************ 00:08:31.539 END TEST nvmf_queue_depth 00:08:31.539 ************************************ 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.539 ************************************ 00:08:31.539 START TEST nvmf_target_multipath 00:08:31.539 ************************************ 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.539 * Looking for test storage... 
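As an aside on reading the queue-depth output above: bdevperf prints its summary both as a table and as plain JSON, so the headline numbers can be recovered with POSIX tools alone. A minimal sketch using a condensed copy of that result (the `result` literal abbreviates the log's JSON for brevity; `jq` would be the cleaner choice where available):

```shell
#!/bin/sh
# Extract the IOPS figure from a (condensed, hand-copied) bdevperf JSON
# summary using only sed. Field values are shortened from the log above.
result='{"job":"NVMe0n1","iops":12350.53,"mibps":48.24,"avg_latency_us":82621.14}'

iops=$(printf '%s\n' "$result" | sed -n 's/.*"iops":\([0-9.]*\).*/\1/p')
echo "IOPS: $iops"
```

This kind of extraction only works because the summary is emitted on predictable single-key boundaries; a proper JSON parser is safer for anything beyond a smoke check.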
00:08:31.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.539 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:31.539 13:16:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
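The `cmp_versions` trace above walks both version strings field by field (split on `.`) to decide whether `lcov 1.15` predates version 2 and thus which coverage options to use. A standalone sketch of that less-than test — an awk reimplementation of the idea, not the script's own bash loop:

```shell
#!/bin/sh
# lt A B: succeed (exit 0) when version A sorts strictly before version B.
# Missing fields are treated as 0, mirroring the dotted-version comparison
# traced in scripts/common.sh above.
lt() {
    awk -v v1="$1" -v v2="$2" '
    BEGIN {
        n1 = split(v1, a, "."); n2 = split(v2, b, ".")
        n = (n1 > n2) ? n1 : n2
        for (i = 1; i <= n; i++) {
            x = (i <= n1) ? a[i] : 0; y = (i <= n2) ? b[i] : 0
            if (x + 0 < y + 0) exit 0   # earlier field decides: less-than
            if (x + 0 > y + 0) exit 1   # greater-than
        }
        exit 1                          # equal is not less-than
    }'
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the harness above selects the pre-2.0 lcov flags (`--rc lcov_branch_coverage=1 ...`) on this builder.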
00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.540 --rc genhtml_branch_coverage=1 00:08:31.540 --rc genhtml_function_coverage=1 00:08:31.540 --rc genhtml_legend=1 00:08:31.540 --rc geninfo_all_blocks=1 00:08:31.540 --rc geninfo_unexecuted_blocks=1 00:08:31.540 00:08:31.540 ' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.540 --rc genhtml_branch_coverage=1 00:08:31.540 --rc genhtml_function_coverage=1 00:08:31.540 --rc genhtml_legend=1 00:08:31.540 --rc geninfo_all_blocks=1 00:08:31.540 --rc geninfo_unexecuted_blocks=1 00:08:31.540 00:08:31.540 ' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.540 --rc genhtml_branch_coverage=1 00:08:31.540 --rc genhtml_function_coverage=1 00:08:31.540 --rc genhtml_legend=1 00:08:31.540 --rc geninfo_all_blocks=1 00:08:31.540 --rc geninfo_unexecuted_blocks=1 00:08:31.540 00:08:31.540 ' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.540 --rc genhtml_branch_coverage=1 00:08:31.540 --rc genhtml_function_coverage=1 00:08:31.540 --rc genhtml_legend=1 00:08:31.540 --rc geninfo_all_blocks=1 00:08:31.540 --rc geninfo_unexecuted_blocks=1 00:08:31.540 00:08:31.540 ' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.540 13:16:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.540 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.541 13:16:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:39.675 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:39.675 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:39.675 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.675 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.676 13:16:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:39.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
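The discovery trace above (nvmf/common.sh@366 through @428) walks the cached PCI vendor:device pairs, buckets them into the e810/x722/mlx arrays, then resolves each device's net interface under /sys. A minimal, hedged sketch of that bucketing step as a standalone function — the ID table here is reduced to the handful of IDs visible in this log, not the full upstream list, and `classify_nic` is an illustrative name, not the real script's:

```shell
#!/usr/bin/env sh
# Hedged sketch of the PCI-ID bucketing seen in the nvmf/common.sh trace
# above; only vendor:device pairs visible in this log are listed.
classify_nic() {
    # $1 = vendor ID, $2 = device ID (lowercase hex, e.g. 0x8086 0x159b)
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:0x1017|0x15b3:0x1019) echo mlx  ;;    # Mellanox ConnectX
        *)                           echo unknown ;;
    esac
}

# The two ports found in this run, 0000:4b:00.0 and 0000:4b:00.1:
classify_nic 0x8086 0x159b   # prints: e810
```

Both ports in this run map to the e810 bucket, which is why the trace takes the `[[ e810 == e810 ]]` branch and copies that array into `pci_devs`.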
00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:08:39.676 00:08:39.676 --- 10.0.0.2 ping statistics --- 00:08:39.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.676 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:08:39.676 00:08:39.676 --- 10.0.0.1 ping statistics --- 00:08:39.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.676 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:39.676 only one NIC for nvmf test 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:39.676 13:16:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.676 rmmod nvme_tcp 00:08:39.676 rmmod nvme_fabrics 00:08:39.676 rmmod nvme_keyring 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.676 13:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.064 00:08:41.064 real 0m9.941s 00:08:41.064 user 0m2.149s 00:08:41.064 sys 0m5.763s 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.064 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:41.064 ************************************ 00:08:41.064 END TEST nvmf_target_multipath 00:08:41.064 ************************************ 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.326 ************************************ 00:08:41.326 START TEST nvmf_zcopy 00:08:41.326 ************************************ 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:41.326 * Looking for test storage... 00:08:41.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:41.326 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:41.587 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.588 13:16:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.588 --rc genhtml_branch_coverage=1 00:08:41.588 --rc genhtml_function_coverage=1 00:08:41.588 --rc genhtml_legend=1 00:08:41.588 --rc geninfo_all_blocks=1 00:08:41.588 --rc geninfo_unexecuted_blocks=1 00:08:41.588 00:08:41.588 ' 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.588 --rc genhtml_branch_coverage=1 00:08:41.588 --rc genhtml_function_coverage=1 00:08:41.588 --rc genhtml_legend=1 00:08:41.588 --rc geninfo_all_blocks=1 00:08:41.588 --rc geninfo_unexecuted_blocks=1 00:08:41.588 00:08:41.588 ' 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.588 --rc genhtml_branch_coverage=1 00:08:41.588 --rc genhtml_function_coverage=1 00:08:41.588 --rc genhtml_legend=1 00:08:41.588 --rc geninfo_all_blocks=1 00:08:41.588 --rc geninfo_unexecuted_blocks=1 00:08:41.588 00:08:41.588 ' 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.588 --rc genhtml_branch_coverage=1 00:08:41.588 --rc 
genhtml_function_coverage=1 00:08:41.588 --rc genhtml_legend=1 00:08:41.588 --rc geninfo_all_blocks=1 00:08:41.588 --rc geninfo_unexecuted_blocks=1 00:08:41.588 00:08:41.588 ' 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.588 13:16:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.588 13:16:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.588 13:16:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.588 13:16:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.820 13:16:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:49.820 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:49.820 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:49.820 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:49.820 13:16:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.820 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:49.821 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.821 13:16:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:08:49.821 00:08:49.821 --- 10.0.0.2 ping statistics --- 00:08:49.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.821 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:08:49.821 00:08:49.821 --- 10.0.0.1 ping statistics --- 00:08:49.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.821 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1992945 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1992945 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1992945 ']' 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.821 13:16:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.821 [2024-12-06 13:16:35.597787] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:49.821 [2024-12-06 13:16:35.597850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.821 [2024-12-06 13:16:35.697590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.821 [2024-12-06 13:16:35.747239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.821 [2024-12-06 13:16:35.747309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:49.821 [2024-12-06 13:16:35.747318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.821 [2024-12-06 13:16:35.747326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.821 [2024-12-06 13:16:35.747332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.821 [2024-12-06 13:16:35.748080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.821 [2024-12-06 13:16:36.462979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.821 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 [2024-12-06 13:16:36.487223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 malloc0 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:50.084 { 00:08:50.084 "params": { 00:08:50.084 "name": "Nvme$subsystem", 00:08:50.084 "trtype": "$TEST_TRANSPORT", 00:08:50.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.084 "adrfam": "ipv4", 00:08:50.084 "trsvcid": "$NVMF_PORT", 00:08:50.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.084 "hdgst": ${hdgst:-false}, 00:08:50.084 "ddgst": ${ddgst:-false} 00:08:50.084 }, 00:08:50.084 "method": "bdev_nvme_attach_controller" 00:08:50.084 } 00:08:50.084 EOF 00:08:50.084 )") 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:50.084 13:16:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:50.084 "params": { 00:08:50.084 "name": "Nvme1", 00:08:50.084 "trtype": "tcp", 00:08:50.084 "traddr": "10.0.0.2", 00:08:50.084 "adrfam": "ipv4", 00:08:50.084 "trsvcid": "4420", 00:08:50.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.084 "hdgst": false, 00:08:50.084 "ddgst": false 00:08:50.084 }, 00:08:50.084 "method": "bdev_nvme_attach_controller" 00:08:50.084 }' 00:08:50.084 [2024-12-06 13:16:36.586375] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:08:50.084 [2024-12-06 13:16:36.586449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993058 ] 00:08:50.084 [2024-12-06 13:16:36.678278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.084 [2024-12-06 13:16:36.731785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.346 Running I/O for 10 seconds... 
00:08:52.306 6515.00 IOPS, 50.90 MiB/s [2024-12-06T12:16:40.347Z] 7308.50 IOPS, 57.10 MiB/s [2024-12-06T12:16:41.287Z] 8157.33 IOPS, 63.73 MiB/s [2024-12-06T12:16:42.229Z] 8580.50 IOPS, 67.04 MiB/s [2024-12-06T12:16:43.170Z] 8841.80 IOPS, 69.08 MiB/s [2024-12-06T12:16:44.114Z] 9010.50 IOPS, 70.39 MiB/s [2024-12-06T12:16:45.057Z] 9127.57 IOPS, 71.31 MiB/s [2024-12-06T12:16:45.994Z] 9218.12 IOPS, 72.02 MiB/s [2024-12-06T12:16:47.376Z] 9290.11 IOPS, 72.58 MiB/s [2024-12-06T12:16:47.376Z] 9346.40 IOPS, 73.02 MiB/s 00:09:00.717 Latency(us) 00:09:00.717 [2024-12-06T12:16:47.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.717 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:00.717 Verification LBA range: start 0x0 length 0x1000 00:09:00.717 Nvme1n1 : 10.01 9348.67 73.04 0.00 0.00 13645.14 2362.03 27088.21 00:09:00.717 [2024-12-06T12:16:47.376Z] =================================================================================================================== 00:09:00.717 [2024-12-06T12:16:47.376Z] Total : 9348.67 73.04 0.00 0.00 13645.14 2362.03 27088.21 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1995078 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.717 13:16:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.717 { 00:09:00.717 "params": { 00:09:00.717 "name": "Nvme$subsystem", 00:09:00.717 "trtype": "$TEST_TRANSPORT", 00:09:00.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.717 "adrfam": "ipv4", 00:09:00.717 "trsvcid": "$NVMF_PORT", 00:09:00.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.717 "hdgst": ${hdgst:-false}, 00:09:00.717 "ddgst": ${ddgst:-false} 00:09:00.717 }, 00:09:00.717 "method": "bdev_nvme_attach_controller" 00:09:00.717 } 00:09:00.717 EOF 00:09:00.717 )") 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:00.717 [2024-12-06 13:16:47.091474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.091500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:00.717 13:16:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.717 "params": { 00:09:00.717 "name": "Nvme1", 00:09:00.717 "trtype": "tcp", 00:09:00.717 "traddr": "10.0.0.2", 00:09:00.717 "adrfam": "ipv4", 00:09:00.717 "trsvcid": "4420", 00:09:00.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:00.717 "hdgst": false, 00:09:00.717 "ddgst": false 00:09:00.717 }, 00:09:00.717 "method": "bdev_nvme_attach_controller" 00:09:00.717 }' 00:09:00.717 [2024-12-06 13:16:47.103475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.103485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.115504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.115514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.127532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.127541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.139561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.139570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.141968] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:09:00.717 [2024-12-06 13:16:47.142021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995078 ] 00:09:00.717 [2024-12-06 13:16:47.151593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.151601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.163625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.163633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.175656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.175664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.187686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.187695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.717 [2024-12-06 13:16:47.199740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.717 [2024-12-06 13:16:47.199749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.211771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.211780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.223802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.223810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:00.718 [2024-12-06 13:16:47.225101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.718 [2024-12-06 13:16:47.235834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.235843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.247865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.247874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.254632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.718 [2024-12-06 13:16:47.259895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.259904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.271931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.271943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.283960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.283974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.295990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.296000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.308019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.308028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.320048] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.320056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.332090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.332107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.344115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.344127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.356147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.356158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.718 [2024-12-06 13:16:47.368177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.718 [2024-12-06 13:16:47.368185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.380209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.380218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.392240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.392248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.404275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.404286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.416307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.416317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.428339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.428347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.440372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.440380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.452405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.452415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.464435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.464444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.476474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.476483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.488506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.488515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.500533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.500543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.512563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 
[2024-12-06 13:16:47.512572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.524595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.524603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.536629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.536638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.548665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.548679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.560695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.560708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 Running I/O for 5 seconds... 
00:09:00.979 [2024-12-06 13:16:47.576466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.576482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.589727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.589744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.603034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.603050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.616230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.616246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.979 [2024-12-06 13:16:47.629941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.979 [2024-12-06 13:16:47.629963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.642813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.642829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.656422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.656437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.669950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.669965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.682437] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.682453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.696063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.696080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.709054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.709069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.721809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.239 [2024-12-06 13:16:47.721825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.239 [2024-12-06 13:16:47.734198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.734214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.746673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.746688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.759657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.759672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.772934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.772949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.785688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.785702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.799223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.799239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.812498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.812513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.825876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.825891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.839459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.839474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.851705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.851720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.864905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.864921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.877252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 [2024-12-06 13:16:47.877271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.240 [2024-12-06 13:16:47.890516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.240 
[2024-12-06 13:16:47.890531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.902790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.902805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.916428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.916443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.929564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.929580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.942396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.942411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.955992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.956007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.969243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.969258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.981930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.981944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:47.995439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:47.995458] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:48.007782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:48.007797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:48.020992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:48.021007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:48.033290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:48.033304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.499 [2024-12-06 13:16:48.046629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.499 [2024-12-06 13:16:48.046644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.059959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.059974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.073710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.073725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.087558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.087572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.100064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.100079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:01.500 [2024-12-06 13:16:48.113505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.113520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.126969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.126988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.140233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.140248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.500 [2024-12-06 13:16:48.153793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.500 [2024-12-06 13:16:48.153808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.167551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.167567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.180157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.180171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.193770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.193785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.206432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.206448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.219929] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.219943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.233519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.233534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.246305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.246320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.259280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.259295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.272531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.272546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.285853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.285868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.298538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.298553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.311750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.311765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.325568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.325583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.338979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.338995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.352317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.352332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.365908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.365924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.378906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.378922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.392234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.392249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.760 [2024-12-06 13:16:48.405865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.760 [2024-12-06 13:16:48.405880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.419339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.419354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.432558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 
[2024-12-06 13:16:48.432572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.446001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.446016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.459429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.459444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.472556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.472571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.485421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.485436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.498184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.498199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.510571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.510586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.523784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.523799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.536897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.536911] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.550200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.550215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.563523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.563537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 19285.00 IOPS, 150.66 MiB/s [2024-12-06T12:16:48.680Z] [2024-12-06 13:16:48.577067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.577082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.590290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.590305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.604208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.604223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.617421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.617435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.630783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.630799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.021 [2024-12-06 13:16:48.644345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.021 [2024-12-06 13:16:48.644361] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:02.021 [2024-12-06 13:16:48.657229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:02.021 [2024-12-06 13:16:48.657244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical two-line error pair (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 13 ms intervals from 13:16:48.669 through 13:16:50.676; duplicate entries elided ...]
00:09:03.065 19339.50 IOPS, 151.09 MiB/s [2024-12-06T12:16:49.724Z]
00:09:04.109 19388.67 IOPS, 151.47 MiB/s [2024-12-06T12:16:50.768Z]
00:09:04.110 [2024-12-06 13:16:50.689073]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.110 [2024-12-06 13:16:50.689088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.110 [2024-12-06 13:16:50.702278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.110 [2024-12-06 13:16:50.702293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.110 [2024-12-06 13:16:50.715444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.110 [2024-12-06 13:16:50.715466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.110 [2024-12-06 13:16:50.728801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.110 [2024-12-06 13:16:50.728816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.110 [2024-12-06 13:16:50.741860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.110 [2024-12-06 13:16:50.741876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.110 [2024-12-06 13:16:50.755219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.110 [2024-12-06 13:16:50.755234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.768578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.768594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.781151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.781166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.794440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.794461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.807862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.807877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.820849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.820864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.834240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.834255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.847609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.847628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.860556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.860571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.370 [2024-12-06 13:16:50.873540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.370 [2024-12-06 13:16:50.873555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.886855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.886870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.899806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 
[2024-12-06 13:16:50.899822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.913000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.913015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.926287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.926303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.939749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.939764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.953164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.953179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.966317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.966333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.979482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.979497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:50.992844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:50.992860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:51.005334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:51.005349] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.371 [2024-12-06 13:16:51.018906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.371 [2024-12-06 13:16:51.018922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.032371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.032386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.045618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.045633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.058860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.058875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.072088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.072104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.084691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.084707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.097056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.097075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.110048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.110064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:04.631 [2024-12-06 13:16:51.123662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.123677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.136085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.136100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.149112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.149127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.161209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.161224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.174499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.174514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.187475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.187489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.631 [2024-12-06 13:16:51.200548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.631 [2024-12-06 13:16:51.200563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.632 [2024-12-06 13:16:51.213943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.632 [2024-12-06 13:16:51.213958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.632 [2024-12-06 13:16:51.227152] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.632 [2024-12-06 13:16:51.227166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.632 [2024-12-06 13:16:51.240730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.632 [2024-12-06 13:16:51.240744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.632 [2024-12-06 13:16:51.253054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.632 [2024-12-06 13:16:51.253069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.632 [2024-12-06 13:16:51.265697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.632 [2024-12-06 13:16:51.265712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.632 [2024-12-06 13:16:51.278639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.632 [2024-12-06 13:16:51.278653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.892 [2024-12-06 13:16:51.291207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.892 [2024-12-06 13:16:51.291222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.304059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.304074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.317663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.317678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.330188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.330203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.343279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.343300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.356679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.356694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.369388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.369403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.382793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.382807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.395366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.395380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.408814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.408829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.422280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.422295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.434647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 
[2024-12-06 13:16:51.434662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.448288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.448304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.460914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.460929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.474395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.474410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.487825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.487840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.501110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.501124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.513535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.513550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.526054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.526069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.893 [2024-12-06 13:16:51.538678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.893 [2024-12-06 13:16:51.538693] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.552092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.552107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.564555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.564570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 19388.00 IOPS, 151.47 MiB/s [2024-12-06T12:16:51.812Z] [2024-12-06 13:16:51.577807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.577821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.590376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.590390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.603538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.603553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.616651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.616666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.629590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.629605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.641882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.641896] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.654592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.654606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.667953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.667968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.680663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.680678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.693912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.693927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.707301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.707316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.719948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.719962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.733045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.733060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.746318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.746333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:05.153 [2024-12-06 13:16:51.759298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.759312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.772396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.772412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.785448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.153 [2024-12-06 13:16:51.785468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.153 [2024-12-06 13:16:51.798447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.154 [2024-12-06 13:16:51.798466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.811549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.811565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.824634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.824649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.837908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.837923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.850522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.850537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.864013] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.864028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.876841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.876857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.890169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.414 [2024-12-06 13:16:51.890184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.414 [2024-12-06 13:16:51.903847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.903862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.916173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.916188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.929558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.929572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.942528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.942543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.954615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.954630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.968362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.968377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.981149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.981164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:51.994035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:51.994050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:52.007406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:52.007421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:52.020115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:52.020129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:52.032913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:52.032927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:52.046379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:52.046394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:52.058651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 [2024-12-06 13:16:52.058666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.415 [2024-12-06 13:16:52.071917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.415 
[2024-12-06 13:16:52.071936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.085114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.085129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.098474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.098489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.111772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.111788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.124979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.124994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.137735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.137751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.150981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.150996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.163991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.164007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.177327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.177342] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.190616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.190631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.203839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.203855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.217525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.217541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.230285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.230300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.243526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.243541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.256395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.256410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.675 [2024-12-06 13:16:52.269281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.675 [2024-12-06 13:16:52.269296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.676 [2024-12-06 13:16:52.282706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.676 [2024-12-06 13:16:52.282722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:05.676 [2024-12-06 13:16:52.295655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.676 [2024-12-06 13:16:52.295670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.936 19393.60 IOPS, 151.51 MiB/s [2024-12-06T12:16:52.595Z] 00:09:05.936 00:09:05.936 Latency(us) 00:09:05.936 [2024-12-06T12:16:52.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.936 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:05.936 Nvme1n1 : 5.01 19394.81 151.52 0.00 0.00 6593.85 2798.93 17148.59 00:09:05.936 [2024-12-06T12:16:52.595Z] =================================================================================================================== 00:09:05.936 [2024-12-06T12:16:52.595Z] Total : 19394.81 151.52 0.00 0.00 6593.85 2798.93 17148.59 00:09:06.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1995078) - No such process 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1995078 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.197 delay0 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.197 13:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:06.197 [2024-12-06 13:16:52.843120] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:14.335 Initializing NVMe Controllers 00:09:14.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:14.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:14.335 Initialization complete. Launching workers. 00:09:14.335 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 33134 00:09:14.335 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33247, failed to submit 128 00:09:14.335 success 33160, unsuccessful 87, failed 0 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.335 rmmod nvme_tcp 00:09:14.335 rmmod nvme_fabrics 00:09:14.335 rmmod nvme_keyring 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.335 13:16:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1992945 ']' 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1992945 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1992945 ']' 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1992945 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.335 13:16:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1992945 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1992945' 00:09:14.335 killing process with pid 1992945 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1992945 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1992945 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
iptr 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.335 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.336 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.336 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.336 13:17:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.716 00:09:15.716 real 0m34.457s 00:09:15.716 user 0m45.273s 00:09:15.716 sys 0m12.023s 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.716 ************************************ 00:09:15.716 END TEST nvmf_zcopy 00:09:15.716 ************************************ 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.716 
************************************ 00:09:15.716 START TEST nvmf_nmic 00:09:15.716 ************************************ 00:09:15.716 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:15.976 * Looking for test storage... 00:09:15.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.976 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.977 --rc genhtml_branch_coverage=1 00:09:15.977 --rc genhtml_function_coverage=1 00:09:15.977 --rc genhtml_legend=1 00:09:15.977 --rc geninfo_all_blocks=1 00:09:15.977 --rc geninfo_unexecuted_blocks=1 00:09:15.977 00:09:15.977 ' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.977 --rc genhtml_branch_coverage=1 00:09:15.977 --rc genhtml_function_coverage=1 00:09:15.977 --rc genhtml_legend=1 00:09:15.977 --rc geninfo_all_blocks=1 00:09:15.977 --rc geninfo_unexecuted_blocks=1 00:09:15.977 00:09:15.977 ' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.977 --rc genhtml_branch_coverage=1 00:09:15.977 --rc genhtml_function_coverage=1 00:09:15.977 --rc genhtml_legend=1 00:09:15.977 --rc geninfo_all_blocks=1 00:09:15.977 --rc geninfo_unexecuted_blocks=1 00:09:15.977 00:09:15.977 ' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.977 --rc genhtml_branch_coverage=1 00:09:15.977 --rc genhtml_function_coverage=1 00:09:15.977 --rc genhtml_legend=1 00:09:15.977 --rc geninfo_all_blocks=1 00:09:15.977 --rc geninfo_unexecuted_blocks=1 00:09:15.977 00:09:15.977 ' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.977 
13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 
00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.977 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.978 
13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.978 13:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.120 13:17:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:24.120 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:24.120 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.120 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:24.121 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:24.121 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:24.121 
13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.121 13:17:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:09:24.121 00:09:24.121 --- 10.0.0.2 ping statistics --- 00:09:24.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.121 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:09:24.121 00:09:24.121 --- 10.0.0.1 ping statistics --- 00:09:24.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.121 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2002012 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2002012 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2002012 ']' 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.121 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 [2024-12-06 13:17:10.113184] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:09:24.121 [2024-12-06 13:17:10.113251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.121 [2024-12-06 13:17:10.215176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.121 [2024-12-06 13:17:10.269965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.121 [2024-12-06 13:17:10.270023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:24.121 [2024-12-06 13:17:10.270032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.121 [2024-12-06 13:17:10.270040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.121 [2024-12-06 13:17:10.270046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.121 [2024-12-06 13:17:10.272089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.121 [2024-12-06 13:17:10.272249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.121 [2024-12-06 13:17:10.272403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.121 [2024-12-06 13:17:10.272404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.382 [2024-12-06 13:17:10.986204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.382 
13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.382 13:17:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.382 Malloc0 00:09:24.382 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.382 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.382 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.382 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.382 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.643 [2024-12-06 13:17:11.059297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:24.643 test case1: single bdev can't be used in multiple subsystems 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.643 [2024-12-06 13:17:11.095073] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:24.643 [2024-12-06 
13:17:11.095099] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:24.643 [2024-12-06 13:17:11.095108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.643 request: 00:09:24.643 { 00:09:24.643 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:24.643 "namespace": { 00:09:24.643 "bdev_name": "Malloc0", 00:09:24.643 "no_auto_visible": false, 00:09:24.643 "hide_metadata": false 00:09:24.643 }, 00:09:24.643 "method": "nvmf_subsystem_add_ns", 00:09:24.643 "req_id": 1 00:09:24.643 } 00:09:24.643 Got JSON-RPC error response 00:09:24.643 response: 00:09:24.643 { 00:09:24.643 "code": -32602, 00:09:24.643 "message": "Invalid parameters" 00:09:24.643 } 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:24.643 Adding namespace failed - expected result. 
00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:24.643 test case2: host connect to nvmf target in multiple paths 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:24.643 [2024-12-06 13:17:11.107325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.643 13:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.027 13:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:27.411 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.411 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:27.411 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.411 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:27.411 13:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:29.954 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:29.954 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:29.954 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.954 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:29.954 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.955 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:29.955 13:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.955 [global] 00:09:29.955 thread=1 00:09:29.955 invalidate=1 00:09:29.955 rw=write 00:09:29.955 time_based=1 00:09:29.955 runtime=1 00:09:29.955 ioengine=libaio 00:09:29.955 direct=1 00:09:29.955 bs=4096 00:09:29.955 iodepth=1 00:09:29.955 norandommap=0 00:09:29.955 numjobs=1 00:09:29.955 00:09:29.955 verify_dump=1 00:09:29.955 verify_backlog=512 00:09:29.955 verify_state_save=0 00:09:29.955 do_verify=1 00:09:29.955 verify=crc32c-intel 00:09:29.955 [job0] 00:09:29.955 filename=/dev/nvme0n1 00:09:29.955 Could not set queue depth (nvme0n1) 00:09:29.955 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.955 fio-3.35 00:09:29.955 Starting 1 thread 00:09:31.342 00:09:31.342 job0: (groupid=0, jobs=1): err= 0: pid=2003327: Fri Dec 6 13:17:17 2024 00:09:31.342 read: IOPS=652, BW=2609KiB/s (2672kB/s)(2612KiB/1001msec) 00:09:31.342 slat (nsec): min=6750, max=62433, avg=24735.58, stdev=7076.16 00:09:31.342 clat (usec): min=329, max=1128, avg=790.88, stdev=105.85 00:09:31.342 lat (usec): min=356, max=1154, 
avg=815.61, stdev=106.66 00:09:31.342 clat percentiles (usec): 00:09:31.342 | 1.00th=[ 482], 5.00th=[ 611], 10.00th=[ 644], 20.00th=[ 693], 00:09:31.342 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 840], 00:09:31.342 | 70.00th=[ 857], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 938], 00:09:31.342 | 99.00th=[ 963], 99.50th=[ 963], 99.90th=[ 1123], 99.95th=[ 1123], 00:09:31.342 | 99.99th=[ 1123] 00:09:31.342 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:31.342 slat (usec): min=9, max=26657, avg=57.80, stdev=832.10 00:09:31.342 clat (usec): min=122, max=690, avg=386.34, stdev=100.29 00:09:31.342 lat (usec): min=133, max=27298, avg=444.13, stdev=846.42 00:09:31.342 clat percentiles (usec): 00:09:31.342 | 1.00th=[ 182], 5.00th=[ 227], 10.00th=[ 281], 20.00th=[ 293], 00:09:31.342 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 404], 60.00th=[ 416], 00:09:31.342 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 529], 95.00th=[ 553], 00:09:31.342 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 668], 99.95th=[ 693], 00:09:31.342 | 99.99th=[ 693] 00:09:31.342 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:31.342 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:31.342 lat (usec) : 250=5.31%, 500=48.60%, 750=19.26%, 1000=26.77% 00:09:31.342 lat (msec) : 2=0.06% 00:09:31.342 cpu : usr=3.00%, sys=4.50%, ctx=1680, majf=0, minf=1 00:09:31.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.342 issued rwts: total=653,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.342 00:09:31.342 Run status group 0 (all jobs): 00:09:31.342 READ: bw=2609KiB/s (2672kB/s), 2609KiB/s-2609KiB/s (2672kB/s-2672kB/s), io=2612KiB 
(2675kB), run=1001-1001msec 00:09:31.342 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:09:31.342 00:09:31.342 Disk stats (read/write): 00:09:31.342 nvme0n1: ios=554/1024, merge=0/0, ticks=1374/404, in_queue=1778, util=98.90% 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:31.342 rmmod nvme_tcp 00:09:31.342 rmmod nvme_fabrics 00:09:31.342 rmmod nvme_keyring 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2002012 ']' 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2002012 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2002012 ']' 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2002012 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2002012 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2002012' 00:09:31.342 killing process with pid 2002012 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2002012 00:09:31.342 13:17:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2002012 00:09:31.603 13:17:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.603 13:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.647 00:09:33.647 real 0m17.759s 00:09:33.647 user 0m50.465s 00:09:33.647 sys 0m6.571s 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.647 ************************************ 00:09:33.647 END TEST nvmf_nmic 00:09:33.647 ************************************ 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.647 ************************************ 00:09:33.647 START TEST nvmf_fio_target 00:09:33.647 ************************************ 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:33.647 * Looking for test storage... 00:09:33.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.647 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:33.915 13:17:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.915 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.915 --rc genhtml_branch_coverage=1 00:09:33.915 --rc genhtml_function_coverage=1 00:09:33.916 --rc genhtml_legend=1 00:09:33.916 --rc geninfo_all_blocks=1 00:09:33.916 --rc geninfo_unexecuted_blocks=1 00:09:33.916 00:09:33.916 ' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.916 --rc genhtml_branch_coverage=1 00:09:33.916 --rc genhtml_function_coverage=1 00:09:33.916 --rc genhtml_legend=1 00:09:33.916 --rc geninfo_all_blocks=1 00:09:33.916 --rc geninfo_unexecuted_blocks=1 00:09:33.916 00:09:33.916 ' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.916 --rc genhtml_branch_coverage=1 00:09:33.916 --rc genhtml_function_coverage=1 00:09:33.916 --rc genhtml_legend=1 00:09:33.916 --rc geninfo_all_blocks=1 00:09:33.916 --rc geninfo_unexecuted_blocks=1 00:09:33.916 00:09:33.916 ' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:33.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.916 --rc genhtml_branch_coverage=1 00:09:33.916 --rc genhtml_function_coverage=1 00:09:33.916 --rc genhtml_legend=1 00:09:33.916 --rc geninfo_all_blocks=1 00:09:33.916 --rc geninfo_unexecuted_blocks=1 00:09:33.916 00:09:33.916 ' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.916 13:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.153 13:17:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:42.153 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:42.153 13:17:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:42.153 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:42.153 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:42.153 Found net devices under 0000:4b:00.1: cvl_0_1 
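The `pci_net_devs=("${pci_net_devs[@]##*/}")` step above turns sysfs paths like `/sys/bus/pci/devices/<pci>/net/cvl_0_0` into bare interface names via bash's `##*/` (strip-longest-prefix) expansion. A minimal self-contained sketch, using a literal path in place of the glob so it runs without the real hardware:

```shell
# Sketch of the PCI-to-netdev name extraction seen in the log above.
# The path below is a stand-in for the "/sys/bus/pci/devices/$pci/net/"* glob;
# "${paths[@]##*/}" strips everything up to the last '/' from each element.
paths=(/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0)
names=("${paths[@]##*/}")
echo "${names[0]}"   # → cvl_0_0
```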
00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.153 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:09:42.154 00:09:42.154 --- 10.0.0.2 ping statistics --- 00:09:42.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.154 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:42.154 00:09:42.154 --- 10.0.0.1 ping statistics --- 00:09:42.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.154 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2007985 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2007985 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2007985 ']' 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.154 13:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.154 [2024-12-06 13:17:27.986004] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
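The `waitforlisten 2007985` call above polls until the target process is up and its RPC socket (`/var/tmp/spdk.sock` in this run) exists, giving up after `max_retries=100`. A hedged sketch of that polling loop; a temp file created by a background job stands in for the UNIX socket so the sketch is self-checking:

```shell
# Sketch of the waitforlisten pattern: poll for the RPC socket path until
# it appears or max_retries is exhausted. rpc_sock is a hypothetical
# stand-in for /var/tmp/spdk.sock; a plain file substitutes for the socket.
rpc_sock=$(mktemp -u)
( sleep 0.2; : > "$rpc_sock" ) &   # simulates the target coming up
i=0
max_retries=100
while [ ! -e "$rpc_sock" ] && [ "$i" -lt "$max_retries" ]; do
    sleep 0.1
    i=$((i + 1))
done
wait
[ -e "$rpc_sock" ] && echo "listener ready"
rm -f "$rpc_sock"
```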
00:09:42.154 [2024-12-06 13:17:27.986069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.154 [2024-12-06 13:17:28.085280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.154 [2024-12-06 13:17:28.139397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.154 [2024-12-06 13:17:28.139462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.154 [2024-12-06 13:17:28.139472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.154 [2024-12-06 13:17:28.139479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.154 [2024-12-06 13:17:28.139486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
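The target above was launched with `-m 0xF`, and the reactor notices that follow show threads starting on cores 0 through 3. A small sketch of how such a hex core mask expands into a core list (bit `c` set means core `c` is in use):

```shell
# Expand a DPDK/SPDK-style hex core mask into the list of enabled cores.
# 0xF mirrors the "-m 0xF" seen in the log (cores 0-3).
mask=0xF
cores=()
for ((c = 0; c < 64; c++)); do
    (( (mask >> c) & 1 )) && cores+=("$c")
done
echo "${cores[@]}"   # → 0 1 2 3
```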
00:09:42.154 [2024-12-06 13:17:28.141860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.154 [2024-12-06 13:17:28.142023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.154 [2024-12-06 13:17:28.142167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.154 [2024-12-06 13:17:28.142167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.415 13:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:42.415 [2024-12-06 13:17:29.024753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.415 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.677 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:42.677 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:42.938 13:17:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:42.938 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.200 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:43.200 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.461 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:43.461 13:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:43.722 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.722 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:43.722 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.984 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:43.984 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.246 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:44.246 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:44.507 13:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:44.507 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:44.507 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.768 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:44.768 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:45.028 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.028 [2024-12-06 13:17:31.642198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.028 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:45.288 13:17:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:45.549 13:17:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
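After the `nvme connect` above, the harness waits for all four namespaces to surface by counting block devices whose serial matches `SPDKISFASTANDAWESOME` (`lsblk -l -o NAME,SERIAL | grep -c …`). A self-contained sketch of that check; `lsblk_stub` is a hypothetical stand-in for `lsblk` so the sketch runs without real NVMe devices:

```shell
# Sketch of the waitforserial device count: grep -c tallies lines whose
# serial column matches, and the result is compared to the expected
# namespace count (4 in this run).
lsblk_stub() {
    # stand-in for `lsblk -l -o NAME,SERIAL` output on the initiator
    printf 'nvme0n%d SPDKISFASTANDAWESOME\n' 1 2 3 4
}
nvme_device_counter=4
nvme_devices=$(lsblk_stub | grep -c SPDKISFASTANDAWESOME)
[ "$nvme_devices" -eq "$nvme_device_counter" ] && echo "all namespaces visible"
```

In the real script this check sits inside a retry loop with a `sleep 2` between attempts, since the namespaces appear asynchronously after connect.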
00:09:46.934 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:46.934 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:46.934 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.934 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:46.934 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:46.934 13:17:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:48.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:48.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:48.846 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:49.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:49.107 13:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:49.107 [global] 00:09:49.107 thread=1 00:09:49.107 invalidate=1 00:09:49.107 rw=write 00:09:49.107 time_based=1 00:09:49.107 runtime=1 00:09:49.107 ioengine=libaio 00:09:49.107 direct=1 00:09:49.107 bs=4096 00:09:49.107 iodepth=1 00:09:49.107 norandommap=0 00:09:49.107 numjobs=1 00:09:49.107 00:09:49.107 
verify_dump=1 00:09:49.107 verify_backlog=512 00:09:49.107 verify_state_save=0 00:09:49.107 do_verify=1 00:09:49.107 verify=crc32c-intel 00:09:49.107 [job0] 00:09:49.107 filename=/dev/nvme0n1 00:09:49.107 [job1] 00:09:49.107 filename=/dev/nvme0n2 00:09:49.107 [job2] 00:09:49.107 filename=/dev/nvme0n3 00:09:49.107 [job3] 00:09:49.107 filename=/dev/nvme0n4 00:09:49.107 Could not set queue depth (nvme0n1) 00:09:49.107 Could not set queue depth (nvme0n2) 00:09:49.107 Could not set queue depth (nvme0n3) 00:09:49.107 Could not set queue depth (nvme0n4) 00:09:49.368 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.368 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.368 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.368 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.368 fio-3.35 00:09:49.368 Starting 4 threads 00:09:50.750 00:09:50.750 job0: (groupid=0, jobs=1): err= 0: pid=2009666: Fri Dec 6 13:17:37 2024 00:09:50.750 read: IOPS=486, BW=1948KiB/s (1994kB/s)(2004KiB/1029msec) 00:09:50.750 slat (nsec): min=7038, max=44915, avg=25478.87, stdev=4438.76 00:09:50.750 clat (usec): min=330, max=42273, avg=1339.92, stdev=4484.48 00:09:50.750 lat (usec): min=356, max=42299, avg=1365.40, stdev=4484.55 00:09:50.750 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 461], 5.00th=[ 570], 10.00th=[ 635], 20.00th=[ 717], 00:09:50.751 | 30.00th=[ 791], 40.00th=[ 832], 50.00th=[ 865], 60.00th=[ 898], 00:09:50.751 | 70.00th=[ 930], 80.00th=[ 979], 90.00th=[ 1057], 95.00th=[ 1090], 00:09:50.751 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.751 | 99.99th=[42206] 00:09:50.751 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:50.751 slat (nsec): min=9789, max=52787, 
avg=32580.39, stdev=7430.06 00:09:50.751 clat (usec): min=171, max=1016, avg=623.64, stdev=143.06 00:09:50.751 lat (usec): min=205, max=1064, avg=656.22, stdev=144.77 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 293], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 502], 00:09:50.751 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:09:50.751 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:09:50.751 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 1020], 99.95th=[ 1020], 00:09:50.751 | 99.99th=[ 1020] 00:09:50.751 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.751 lat (usec) : 250=0.20%, 500=10.86%, 750=42.05%, 1000=38.60% 00:09:50.751 lat (msec) : 2=7.70%, 50=0.59% 00:09:50.751 cpu : usr=1.17%, sys=3.31%, ctx=1015, majf=0, minf=1 00:09:50.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 issued rwts: total=501,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.751 job1: (groupid=0, jobs=1): err= 0: pid=2009686: Fri Dec 6 13:17:37 2024 00:09:50.751 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:09:50.751 slat (nsec): min=7344, max=26485, avg=24205.22, stdev=5588.11 00:09:50.751 clat (usec): min=782, max=42957, avg=39770.93, stdev=9735.87 00:09:50.751 lat (usec): min=793, max=42983, avg=39795.14, stdev=9739.33 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 783], 5.00th=[ 783], 10.00th=[41681], 20.00th=[41681], 00:09:50.751 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:50.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:50.751 | 
99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:50.751 | 99.99th=[42730] 00:09:50.751 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:09:50.751 slat (nsec): min=9850, max=62311, avg=32311.56, stdev=9893.07 00:09:50.751 clat (usec): min=235, max=932, avg=592.39, stdev=116.25 00:09:50.751 lat (usec): min=247, max=965, avg=624.70, stdev=118.19 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 343], 5.00th=[ 383], 10.00th=[ 433], 20.00th=[ 486], 00:09:50.751 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:09:50.751 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 758], 00:09:50.751 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 930], 00:09:50.751 | 99.99th=[ 930] 00:09:50.751 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.751 lat (usec) : 250=0.19%, 500=21.89%, 750=67.36%, 1000=7.36% 00:09:50.751 lat (msec) : 50=3.21% 00:09:50.751 cpu : usr=0.87%, sys=2.21%, ctx=530, majf=0, minf=1 00:09:50.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.751 job2: (groupid=0, jobs=1): err= 0: pid=2009707: Fri Dec 6 13:17:37 2024 00:09:50.751 read: IOPS=749, BW=2997KiB/s (3069kB/s)(3000KiB/1001msec) 00:09:50.751 slat (nsec): min=6963, max=59479, avg=25979.22, stdev=5479.64 00:09:50.751 clat (usec): min=189, max=1203, avg=634.54, stdev=185.55 00:09:50.751 lat (usec): min=197, max=1229, avg=660.52, stdev=186.06 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 251], 5.00th=[ 359], 10.00th=[ 392], 
20.00th=[ 474], 00:09:50.751 | 30.00th=[ 519], 40.00th=[ 570], 50.00th=[ 619], 60.00th=[ 676], 00:09:50.751 | 70.00th=[ 734], 80.00th=[ 824], 90.00th=[ 889], 95.00th=[ 947], 00:09:50.751 | 99.00th=[ 1057], 99.50th=[ 1123], 99.90th=[ 1205], 99.95th=[ 1205], 00:09:50.751 | 99.99th=[ 1205] 00:09:50.751 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:50.751 slat (nsec): min=8965, max=55687, avg=30451.44, stdev=10272.85 00:09:50.751 clat (usec): min=109, max=919, avg=449.35, stdev=175.70 00:09:50.751 lat (usec): min=120, max=953, avg=479.80, stdev=179.20 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 121], 5.00th=[ 147], 10.00th=[ 227], 20.00th=[ 285], 00:09:50.751 | 30.00th=[ 334], 40.00th=[ 388], 50.00th=[ 445], 60.00th=[ 502], 00:09:50.751 | 70.00th=[ 562], 80.00th=[ 619], 90.00th=[ 693], 95.00th=[ 725], 00:09:50.751 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 906], 99.95th=[ 922], 00:09:50.751 | 99.99th=[ 922] 00:09:50.751 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.751 lat (usec) : 250=8.51%, 500=36.87%, 750=41.09%, 1000=12.63% 00:09:50.751 lat (msec) : 2=0.90% 00:09:50.751 cpu : usr=2.20%, sys=5.60%, ctx=1777, majf=0, minf=1 00:09:50.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 issued rwts: total=750,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.751 job3: (groupid=0, jobs=1): err= 0: pid=2009714: Fri Dec 6 13:17:37 2024 00:09:50.751 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:09:50.751 slat (nsec): min=26988, max=28293, avg=27346.88, stdev=336.07 00:09:50.751 clat (usec): min=40942, 
max=42860, avg=41981.55, stdev=463.08 00:09:50.751 lat (usec): min=40969, max=42887, avg=42008.90, stdev=463.05 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:50.751 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:50.751 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:50.751 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:50.751 | 99.99th=[42730] 00:09:50.751 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:50.751 slat (usec): min=9, max=106, avg=31.00, stdev=10.10 00:09:50.751 clat (usec): min=186, max=895, avg=537.97, stdev=125.89 00:09:50.751 lat (usec): min=221, max=929, avg=568.96, stdev=129.02 00:09:50.751 clat percentiles (usec): 00:09:50.751 | 1.00th=[ 241], 5.00th=[ 322], 10.00th=[ 363], 20.00th=[ 441], 00:09:50.751 | 30.00th=[ 478], 40.00th=[ 502], 50.00th=[ 537], 60.00th=[ 578], 00:09:50.751 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 734], 00:09:50.751 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 898], 99.95th=[ 898], 00:09:50.751 | 99.99th=[ 898] 00:09:50.751 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.751 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.751 lat (usec) : 250=1.51%, 500=35.92%, 750=55.20%, 1000=4.16% 00:09:50.751 lat (msec) : 50=3.21% 00:09:50.751 cpu : usr=0.99%, sys=1.98%, ctx=530, majf=0, minf=1 00:09:50.751 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.751 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.751 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.751 00:09:50.751 Run status group 0 (all jobs): 
00:09:50.751 READ: bw=4946KiB/s (5065kB/s), 67.4KiB/s-2997KiB/s (69.0kB/s-3069kB/s), io=5144KiB (5267kB), run=1001-1040msec 00:09:50.751 WRITE: bw=9846KiB/s (10.1MB/s), 1969KiB/s-4092KiB/s (2016kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1040msec 00:09:50.751 00:09:50.751 Disk stats (read/write): 00:09:50.751 nvme0n1: ios=521/512, merge=0/0, ticks=1410/307, in_queue=1717, util=96.39% 00:09:50.751 nvme0n2: ios=49/512, merge=0/0, ticks=555/261, in_queue=816, util=87.64% 00:09:50.751 nvme0n3: ios=534/991, merge=0/0, ticks=1230/422, in_queue=1652, util=96.83% 00:09:50.751 nvme0n4: ios=12/512, merge=0/0, ticks=504/223, in_queue=727, util=89.40% 00:09:50.751 13:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:50.751 [global] 00:09:50.751 thread=1 00:09:50.751 invalidate=1 00:09:50.751 rw=randwrite 00:09:50.751 time_based=1 00:09:50.751 runtime=1 00:09:50.751 ioengine=libaio 00:09:50.751 direct=1 00:09:50.751 bs=4096 00:09:50.751 iodepth=1 00:09:50.751 norandommap=0 00:09:50.751 numjobs=1 00:09:50.751 00:09:50.751 verify_dump=1 00:09:50.751 verify_backlog=512 00:09:50.751 verify_state_save=0 00:09:50.751 do_verify=1 00:09:50.751 verify=crc32c-intel 00:09:50.751 [job0] 00:09:50.751 filename=/dev/nvme0n1 00:09:50.751 [job1] 00:09:50.751 filename=/dev/nvme0n2 00:09:50.751 [job2] 00:09:50.751 filename=/dev/nvme0n3 00:09:50.751 [job3] 00:09:50.751 filename=/dev/nvme0n4 00:09:50.751 Could not set queue depth (nvme0n1) 00:09:50.751 Could not set queue depth (nvme0n2) 00:09:50.751 Could not set queue depth (nvme0n3) 00:09:50.751 Could not set queue depth (nvme0n4) 00:09:51.011 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.011 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.011 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.011 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.011 fio-3.35 00:09:51.011 Starting 4 threads 00:09:52.395 00:09:52.395 job0: (groupid=0, jobs=1): err= 0: pid=2010174: Fri Dec 6 13:17:38 2024 00:09:52.395 read: IOPS=702, BW=2809KiB/s (2877kB/s)(2812KiB/1001msec) 00:09:52.395 slat (nsec): min=6325, max=46784, avg=24179.05, stdev=7417.94 00:09:52.395 clat (usec): min=460, max=1104, avg=773.87, stdev=100.40 00:09:52.395 lat (usec): min=468, max=1130, avg=798.04, stdev=102.38 00:09:52.395 clat percentiles (usec): 00:09:52.395 | 1.00th=[ 537], 5.00th=[ 611], 10.00th=[ 635], 20.00th=[ 685], 00:09:52.395 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 816], 00:09:52.395 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 938], 00:09:52.395 | 99.00th=[ 996], 99.50th=[ 1012], 99.90th=[ 1106], 99.95th=[ 1106], 00:09:52.395 | 99.99th=[ 1106] 00:09:52.396 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:52.396 slat (nsec): min=8706, max=69132, avg=28872.68, stdev=9490.44 00:09:52.396 clat (usec): min=119, max=1689, avg=387.20, stdev=111.25 00:09:52.396 lat (usec): min=151, max=1721, avg=416.07, stdev=113.33 00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 200], 5.00th=[ 235], 10.00th=[ 281], 20.00th=[ 302], 00:09:52.396 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 375], 60.00th=[ 404], 00:09:52.396 | 70.00th=[ 433], 80.00th=[ 478], 90.00th=[ 529], 95.00th=[ 570], 00:09:52.396 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 1205], 99.95th=[ 1696], 00:09:52.396 | 99.99th=[ 1696] 00:09:52.396 bw ( KiB/s): min= 4096, max= 4096, per=37.39%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.396 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.396 lat (usec) : 250=3.88%, 500=46.27%, 750=25.19%, 1000=24.15% 00:09:52.396 lat (msec) : 
2=0.52% 00:09:52.396 cpu : usr=3.70%, sys=6.10%, ctx=1728, majf=0, minf=1 00:09:52.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 issued rwts: total=703,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.396 job1: (groupid=0, jobs=1): err= 0: pid=2010189: Fri Dec 6 13:17:38 2024 00:09:52.396 read: IOPS=15, BW=63.3KiB/s (64.8kB/s)(64.0KiB/1011msec) 00:09:52.396 slat (nsec): min=26517, max=26978, avg=26717.31, stdev=116.17 00:09:52.396 clat (usec): min=1180, max=43100, avg=39576.21, stdev=10245.47 00:09:52.396 lat (usec): min=1207, max=43127, avg=39602.93, stdev=10245.40 00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:09:52.396 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:52.396 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:09:52.396 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:52.396 | 99.99th=[43254] 00:09:52.396 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:52.396 slat (nsec): min=9051, max=70125, avg=30563.24, stdev=9592.85 00:09:52.396 clat (usec): min=190, max=1054, avg=697.57, stdev=125.46 00:09:52.396 lat (usec): min=200, max=1092, avg=728.14, stdev=129.85 00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 388], 5.00th=[ 457], 10.00th=[ 519], 20.00th=[ 594], 00:09:52.396 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 742], 00:09:52.396 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:09:52.396 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1057], 99.95th=[ 1057], 00:09:52.396 | 99.99th=[ 1057] 00:09:52.396 bw ( KiB/s): min= 4096, max= 4096, 
per=37.39%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.396 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.396 lat (usec) : 250=0.19%, 500=7.01%, 750=54.36%, 1000=35.23% 00:09:52.396 lat (msec) : 2=0.38%, 50=2.84% 00:09:52.396 cpu : usr=1.19%, sys=1.88%, ctx=528, majf=0, minf=1 00:09:52.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.396 job2: (groupid=0, jobs=1): err= 0: pid=2010209: Fri Dec 6 13:17:38 2024 00:09:52.396 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:52.396 slat (nsec): min=9883, max=57229, avg=25623.10, stdev=2804.97 00:09:52.396 clat (usec): min=758, max=1239, avg=1011.81, stdev=71.15 00:09:52.396 lat (usec): min=783, max=1264, avg=1037.44, stdev=70.76 00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 807], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 971], 00:09:52.396 | 30.00th=[ 988], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1029], 00:09:52.396 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:09:52.396 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:52.396 | 99.99th=[ 1237] 00:09:52.396 write: IOPS=736, BW=2945KiB/s (3016kB/s)(2948KiB/1001msec); 0 zone resets 00:09:52.396 slat (nsec): min=9195, max=64923, avg=28206.02, stdev=8711.11 00:09:52.396 clat (usec): min=230, max=1305, avg=595.21, stdev=118.08 00:09:52.396 lat (usec): min=243, max=1338, avg=623.42, stdev=121.57 00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 281], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 494], 00:09:52.396 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:09:52.396 | 70.00th=[ 668], 
80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 758], 00:09:52.396 | 99.00th=[ 807], 99.50th=[ 898], 99.90th=[ 1303], 99.95th=[ 1303], 00:09:52.396 | 99.99th=[ 1303] 00:09:52.396 bw ( KiB/s): min= 4096, max= 4096, per=37.39%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.396 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.396 lat (usec) : 250=0.24%, 500=12.33%, 750=42.83%, 1000=18.09% 00:09:52.396 lat (msec) : 2=26.50% 00:09:52.396 cpu : usr=1.60%, sys=3.80%, ctx=1249, majf=0, minf=2 00:09:52.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 issued rwts: total=512,737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.396 job3: (groupid=0, jobs=1): err= 0: pid=2010216: Fri Dec 6 13:17:38 2024 00:09:52.396 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1017msec) 00:09:52.396 slat (nsec): min=8486, max=28441, avg=26965.65, stdev=4766.28 00:09:52.396 clat (usec): min=1008, max=42958, avg=39690.89, stdev=9976.64 00:09:52.396 lat (usec): min=1037, max=42986, avg=39717.86, stdev=9976.36 00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41681], 00:09:52.396 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:52.396 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:52.396 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:52.396 | 99.99th=[42730] 00:09:52.396 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:52.396 slat (nsec): min=9491, max=55375, avg=31920.50, stdev=9993.28 00:09:52.396 clat (usec): min=161, max=902, avg=626.39, stdev=117.45 00:09:52.396 lat (usec): min=173, max=937, avg=658.31, stdev=122.47 
00:09:52.396 clat percentiles (usec): 00:09:52.396 | 1.00th=[ 343], 5.00th=[ 420], 10.00th=[ 469], 20.00th=[ 537], 00:09:52.396 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 660], 00:09:52.396 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:09:52.396 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:09:52.396 | 99.99th=[ 906] 00:09:52.396 bw ( KiB/s): min= 4096, max= 4096, per=37.39%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.396 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.396 lat (usec) : 250=0.19%, 500=13.80%, 750=69.00%, 1000=13.80% 00:09:52.396 lat (msec) : 2=0.19%, 50=3.02% 00:09:52.396 cpu : usr=1.38%, sys=1.77%, ctx=532, majf=0, minf=1 00:09:52.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.396 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.396 00:09:52.396 Run status group 0 (all jobs): 00:09:52.396 READ: bw=4909KiB/s (5026kB/s), 63.3KiB/s-2809KiB/s (64.8kB/s-2877kB/s), io=4992KiB (5112kB), run=1001-1017msec 00:09:52.396 WRITE: bw=10.7MiB/s (11.2MB/s), 2014KiB/s-4092KiB/s (2062kB/s-4190kB/s), io=10.9MiB (11.4MB), run=1001-1017msec 00:09:52.396 00:09:52.396 Disk stats (read/write): 00:09:52.396 nvme0n1: ios=562/994, merge=0/0, ticks=382/276, in_queue=658, util=86.27% 00:09:52.396 nvme0n2: ios=48/512, merge=0/0, ticks=551/285, in_queue=836, util=95.92% 00:09:52.396 nvme0n3: ios=487/512, merge=0/0, ticks=471/293, in_queue=764, util=88.38% 00:09:52.396 nvme0n4: ios=34/512, merge=0/0, ticks=1381/248, in_queue=1629, util=97.11% 00:09:52.396 13:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:52.396 [global] 00:09:52.396 thread=1 00:09:52.396 invalidate=1 00:09:52.396 rw=write 00:09:52.396 time_based=1 00:09:52.396 runtime=1 00:09:52.396 ioengine=libaio 00:09:52.396 direct=1 00:09:52.396 bs=4096 00:09:52.396 iodepth=128 00:09:52.396 norandommap=0 00:09:52.396 numjobs=1 00:09:52.396 00:09:52.396 verify_dump=1 00:09:52.396 verify_backlog=512 00:09:52.396 verify_state_save=0 00:09:52.396 do_verify=1 00:09:52.396 verify=crc32c-intel 00:09:52.396 [job0] 00:09:52.396 filename=/dev/nvme0n1 00:09:52.396 [job1] 00:09:52.396 filename=/dev/nvme0n2 00:09:52.396 [job2] 00:09:52.396 filename=/dev/nvme0n3 00:09:52.396 [job3] 00:09:52.396 filename=/dev/nvme0n4 00:09:52.396 Could not set queue depth (nvme0n1) 00:09:52.396 Could not set queue depth (nvme0n2) 00:09:52.396 Could not set queue depth (nvme0n3) 00:09:52.396 Could not set queue depth (nvme0n4) 00:09:52.965 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.965 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.965 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.965 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.965 fio-3.35 00:09:52.965 Starting 4 threads 00:09:53.907 00:09:53.907 job0: (groupid=0, jobs=1): err= 0: pid=2010662: Fri Dec 6 13:17:40 2024 00:09:53.907 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:53.907 slat (nsec): min=921, max=26504k, avg=146195.64, stdev=1036485.31 00:09:53.907 clat (usec): min=2392, max=77519, avg=19148.88, stdev=15120.14 00:09:53.907 lat (usec): min=2396, max=82114, avg=19295.08, stdev=15249.92 00:09:53.907 clat percentiles (usec): 00:09:53.907 | 1.00th=[ 4015], 5.00th=[ 7111], 
10.00th=[ 8291], 20.00th=[ 8586], 00:09:53.907 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[11863], 60.00th=[15795], 00:09:53.907 | 70.00th=[17957], 80.00th=[34866], 90.00th=[47973], 95.00th=[51119], 00:09:53.907 | 99.00th=[58983], 99.50th=[62129], 99.90th=[77071], 99.95th=[77071], 00:09:53.907 | 99.99th=[77071] 00:09:53.907 write: IOPS=4021, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec); 0 zone resets 00:09:53.907 slat (nsec): min=1645, max=14502k, avg=112112.39, stdev=725590.35 00:09:53.907 clat (usec): min=1245, max=85240, avg=14451.93, stdev=13855.50 00:09:53.907 lat (usec): min=1256, max=85252, avg=14564.04, stdev=13952.97 00:09:53.907 clat percentiles (usec): 00:09:53.907 | 1.00th=[ 2540], 5.00th=[ 4686], 10.00th=[ 5735], 20.00th=[ 7832], 00:09:53.907 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[11338], 00:09:53.907 | 70.00th=[14746], 80.00th=[16450], 90.00th=[24773], 95.00th=[47973], 00:09:53.907 | 99.00th=[79168], 99.50th=[82314], 99.90th=[85459], 99.95th=[85459], 00:09:53.907 | 99.99th=[85459] 00:09:53.907 bw ( KiB/s): min= 7536, max=23720, per=15.81%, avg=15628.00, stdev=11443.82, samples=2 00:09:53.908 iops : min= 1884, max= 5930, avg=3907.00, stdev=2860.95, samples=2 00:09:53.908 lat (msec) : 2=0.37%, 4=2.39%, 10=48.27%, 20=27.68%, 50=14.72% 00:09:53.908 lat (msec) : 100=6.58% 00:09:53.908 cpu : usr=3.09%, sys=3.69%, ctx=410, majf=0, minf=1 00:09:53.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:53.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.908 issued rwts: total=3584,4034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.908 job1: (groupid=0, jobs=1): err= 0: pid=2010667: Fri Dec 6 13:17:40 2024 00:09:53.908 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:09:53.908 slat (nsec): min=1018, 
max=9625.3k, avg=64062.41, stdev=472984.45 00:09:53.908 clat (usec): min=2262, max=40317, avg=8949.88, stdev=4510.34 00:09:53.908 lat (usec): min=2265, max=40324, avg=9013.94, stdev=4530.44 00:09:53.908 clat percentiles (usec): 00:09:53.908 | 1.00th=[ 4293], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6390], 00:09:53.908 | 30.00th=[ 6783], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 8291], 00:09:53.908 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[13829], 95.00th=[15926], 00:09:53.908 | 99.00th=[34866], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:09:53.908 | 99.99th=[40109] 00:09:53.908 write: IOPS=7799, BW=30.5MiB/s (31.9MB/s)(30.6MiB/1006msec); 0 zone resets 00:09:53.908 slat (nsec): min=1678, max=29064k, avg=56623.45, stdev=492518.92 00:09:53.908 clat (usec): min=1341, max=22461, avg=7475.52, stdev=2970.87 00:09:53.908 lat (usec): min=1352, max=36169, avg=7532.14, stdev=3018.28 00:09:53.908 clat percentiles (usec): 00:09:53.908 | 1.00th=[ 2638], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 5538], 00:09:53.908 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7177], 00:09:53.908 | 70.00th=[ 7832], 80.00th=[ 9110], 90.00th=[11207], 95.00th=[12780], 00:09:53.908 | 99.00th=[19530], 99.50th=[21103], 99.90th=[22152], 99.95th=[22414], 00:09:53.908 | 99.99th=[22414] 00:09:53.908 bw ( KiB/s): min=29352, max=32400, per=31.24%, avg=30876.00, stdev=2155.26, samples=2 00:09:53.908 iops : min= 7338, max= 8100, avg=7719.00, stdev=538.82, samples=2 00:09:53.908 lat (msec) : 2=0.21%, 4=3.76%, 10=77.87%, 20=16.89%, 50=1.26% 00:09:53.908 cpu : usr=6.07%, sys=9.35%, ctx=596, majf=0, minf=1 00:09:53.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:53.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.908 issued rwts: total=7680,7846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.908 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:53.908 job2: (groupid=0, jobs=1): err= 0: pid=2010686: Fri Dec 6 13:17:40 2024 00:09:53.908 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:09:53.908 slat (nsec): min=990, max=10903k, avg=79261.95, stdev=556275.30 00:09:53.908 clat (usec): min=1514, max=31278, avg=10360.45, stdev=3616.73 00:09:53.908 lat (usec): min=1526, max=31281, avg=10439.71, stdev=3654.18 00:09:53.908 clat percentiles (usec): 00:09:53.908 | 1.00th=[ 3720], 5.00th=[ 6390], 10.00th=[ 7308], 20.00th=[ 7963], 00:09:53.908 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:09:53.908 | 70.00th=[10683], 80.00th=[12256], 90.00th=[15795], 95.00th=[17695], 00:09:53.908 | 99.00th=[22152], 99.50th=[24249], 99.90th=[31327], 99.95th=[31327], 00:09:53.908 | 99.99th=[31327] 00:09:53.908 write: IOPS=6088, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1004msec); 0 zone resets 00:09:53.908 slat (nsec): min=1665, max=32534k, avg=81679.71, stdev=750844.19 00:09:53.908 clat (usec): min=518, max=46274, avg=10884.56, stdev=7030.75 00:09:53.908 lat (usec): min=1199, max=46286, avg=10966.24, stdev=7077.25 00:09:53.908 clat percentiles (usec): 00:09:53.908 | 1.00th=[ 1958], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 6390], 00:09:53.908 | 30.00th=[ 7177], 40.00th=[ 8094], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:53.908 | 70.00th=[11207], 80.00th=[14615], 90.00th=[19530], 95.00th=[23725], 00:09:53.908 | 99.00th=[43254], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:09:53.908 | 99.99th=[46400] 00:09:53.908 bw ( KiB/s): min=17704, max=30176, per=24.23%, avg=23940.00, stdev=8819.04, samples=2 00:09:53.908 iops : min= 4426, max= 7544, avg=5985.00, stdev=2204.76, samples=2 00:09:53.908 lat (usec) : 750=0.01% 00:09:53.908 lat (msec) : 2=0.73%, 4=1.69%, 10=60.13%, 20=32.04%, 50=5.40% 00:09:53.908 cpu : usr=4.29%, sys=7.78%, ctx=308, majf=0, minf=1 00:09:53.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:53.908 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.908 issued rwts: total=5632,6113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.908 job3: (groupid=0, jobs=1): err= 0: pid=2010694: Fri Dec 6 13:17:40 2024 00:09:53.908 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:09:53.908 slat (nsec): min=932, max=14105k, avg=70939.21, stdev=537856.83 00:09:53.908 clat (usec): min=3196, max=31636, avg=9535.30, stdev=3369.20 00:09:53.908 lat (usec): min=3202, max=31663, avg=9606.24, stdev=3404.81 00:09:53.908 clat percentiles (usec): 00:09:53.908 | 1.00th=[ 3589], 5.00th=[ 5538], 10.00th=[ 6915], 20.00th=[ 7701], 00:09:53.908 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:09:53.908 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[13435], 95.00th=[16712], 00:09:53.908 | 99.00th=[23200], 99.50th=[23200], 99.90th=[26870], 99.95th=[30016], 00:09:53.908 | 99.99th=[31589] 00:09:53.908 write: IOPS=6839, BW=26.7MiB/s (28.0MB/s)(26.8MiB/1003msec); 0 zone resets 00:09:53.908 slat (nsec): min=1587, max=17756k, avg=69785.38, stdev=499100.47 00:09:53.908 clat (usec): min=673, max=33408, avg=9283.57, stdev=3831.81 00:09:53.908 lat (usec): min=2023, max=33421, avg=9353.36, stdev=3854.67 00:09:53.908 clat percentiles (usec): 00:09:53.908 | 1.00th=[ 3589], 5.00th=[ 5211], 10.00th=[ 6063], 20.00th=[ 7373], 00:09:53.908 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 8848], 00:09:53.908 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12125], 95.00th=[12911], 00:09:53.908 | 99.00th=[26346], 99.50th=[30016], 99.90th=[33424], 99.95th=[33424], 00:09:53.908 | 99.99th=[33424] 00:09:53.908 bw ( KiB/s): min=25192, max=28672, per=27.25%, avg=26932.00, stdev=2460.73, samples=2 00:09:53.908 iops : min= 6298, max= 7168, avg=6733.00, stdev=615.18, samples=2 00:09:53.908 lat (usec) : 750=0.01% 
00:09:53.908 lat (msec) : 4=1.57%, 10=76.32%, 20=18.79%, 50=3.31% 00:09:53.908 cpu : usr=5.29%, sys=7.19%, ctx=383, majf=0, minf=1 00:09:53.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:53.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.908 issued rwts: total=6656,6860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.908 00:09:53.908 Run status group 0 (all jobs): 00:09:53.908 READ: bw=91.5MiB/s (95.9MB/s), 14.0MiB/s-29.8MiB/s (14.6MB/s-31.3MB/s), io=92.0MiB (96.5MB), run=1003-1006msec 00:09:53.908 WRITE: bw=96.5MiB/s (101MB/s), 15.7MiB/s-30.5MiB/s (16.5MB/s-31.9MB/s), io=97.1MiB (102MB), run=1003-1006msec 00:09:53.908 00:09:53.908 Disk stats (read/write): 00:09:53.908 nvme0n1: ios=3564/3584, merge=0/0, ticks=29673/17376, in_queue=47049, util=84.57% 00:09:53.908 nvme0n2: ios=6199/6287, merge=0/0, ticks=51481/45155, in_queue=96636, util=90.62% 00:09:53.908 nvme0n3: ios=4658/4672, merge=0/0, ticks=32602/29905, in_queue=62507, util=93.88% 00:09:53.908 nvme0n4: ios=5529/5632, merge=0/0, ticks=36153/29961, in_queue=66114, util=96.16% 00:09:53.908 13:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:54.168 [global] 00:09:54.168 thread=1 00:09:54.168 invalidate=1 00:09:54.168 rw=randwrite 00:09:54.168 time_based=1 00:09:54.168 runtime=1 00:09:54.168 ioengine=libaio 00:09:54.168 direct=1 00:09:54.168 bs=4096 00:09:54.168 iodepth=128 00:09:54.168 norandommap=0 00:09:54.168 numjobs=1 00:09:54.168 00:09:54.168 verify_dump=1 00:09:54.168 verify_backlog=512 00:09:54.168 verify_state_save=0 00:09:54.168 do_verify=1 00:09:54.168 verify=crc32c-intel 00:09:54.168 [job0] 00:09:54.168 filename=/dev/nvme0n1 00:09:54.168 
[job1] 00:09:54.168 filename=/dev/nvme0n2 00:09:54.168 [job2] 00:09:54.168 filename=/dev/nvme0n3 00:09:54.168 [job3] 00:09:54.168 filename=/dev/nvme0n4 00:09:54.168 Could not set queue depth (nvme0n1) 00:09:54.168 Could not set queue depth (nvme0n2) 00:09:54.168 Could not set queue depth (nvme0n3) 00:09:54.168 Could not set queue depth (nvme0n4) 00:09:54.429 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.429 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.429 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.429 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.429 fio-3.35 00:09:54.429 Starting 4 threads 00:09:55.810 00:09:55.810 job0: (groupid=0, jobs=1): err= 0: pid=2011177: Fri Dec 6 13:17:42 2024 00:09:55.810 read: IOPS=8009, BW=31.3MiB/s (32.8MB/s)(31.5MiB/1006msec) 00:09:55.810 slat (nsec): min=975, max=18335k, avg=64549.14, stdev=523706.85 00:09:55.810 clat (usec): min=1712, max=28084, avg=8088.80, stdev=3305.53 00:09:55.810 lat (usec): min=2264, max=28104, avg=8153.34, stdev=3331.61 00:09:55.810 clat percentiles (usec): 00:09:55.810 | 1.00th=[ 3261], 5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 6521], 00:09:55.810 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7373], 00:09:55.810 | 70.00th=[ 7963], 80.00th=[ 8979], 90.00th=[11076], 95.00th=[12649], 00:09:55.810 | 99.00th=[24773], 99.50th=[27657], 99.90th=[27919], 99.95th=[28181], 00:09:55.810 | 99.99th=[28181] 00:09:55.810 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:09:55.810 slat (nsec): min=1630, max=10270k, avg=53557.87, stdev=292917.67 00:09:55.810 clat (usec): min=1338, max=28082, avg=7611.59, stdev=3165.83 00:09:55.810 lat (usec): min=1348, max=28091, avg=7665.15, stdev=3192.35 
00:09:55.810 clat percentiles (usec): 00:09:55.810 | 1.00th=[ 2278], 5.00th=[ 3654], 10.00th=[ 4555], 20.00th=[ 5866], 00:09:55.810 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7046], 00:09:55.810 | 70.00th=[ 7242], 80.00th=[ 8029], 90.00th=[11994], 95.00th=[14615], 00:09:55.810 | 99.00th=[18220], 99.50th=[19530], 99.90th=[20317], 99.95th=[20317], 00:09:55.810 | 99.99th=[28181] 00:09:55.810 bw ( KiB/s): min=32768, max=32768, per=33.38%, avg=32768.00, stdev= 0.00, samples=2 00:09:55.810 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:09:55.810 lat (msec) : 2=0.32%, 4=3.56%, 10=80.46%, 20=14.55%, 50=1.11% 00:09:55.810 cpu : usr=4.78%, sys=8.16%, ctx=910, majf=0, minf=2 00:09:55.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:55.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.810 issued rwts: total=8058,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.810 job1: (groupid=0, jobs=1): err= 0: pid=2011179: Fri Dec 6 13:17:42 2024 00:09:55.810 read: IOPS=8224, BW=32.1MiB/s (33.7MB/s)(32.3MiB/1004msec) 00:09:55.810 slat (nsec): min=922, max=6733.7k, avg=54306.13, stdev=402422.10 00:09:55.810 clat (usec): min=1774, max=19896, avg=7587.15, stdev=2092.58 00:09:55.810 lat (usec): min=2389, max=19922, avg=7641.45, stdev=2117.21 00:09:55.810 clat percentiles (usec): 00:09:55.810 | 1.00th=[ 3490], 5.00th=[ 4555], 10.00th=[ 5538], 20.00th=[ 6456], 00:09:55.810 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:55.810 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[11600], 00:09:55.810 | 99.00th=[16057], 99.50th=[16319], 99.90th=[19530], 99.95th=[19530], 00:09:55.810 | 99.99th=[19792] 00:09:55.810 write: IOPS=8669, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1004msec); 0 zone resets 
00:09:55.810 slat (nsec): min=1504, max=5827.4k, avg=52236.12, stdev=318717.63 00:09:55.810 clat (usec): min=592, max=30577, avg=7440.69, stdev=4067.31 00:09:55.810 lat (usec): min=622, max=30588, avg=7492.92, stdev=4093.70 00:09:55.810 clat percentiles (usec): 00:09:55.810 | 1.00th=[ 2180], 5.00th=[ 3490], 10.00th=[ 4146], 20.00th=[ 5014], 00:09:55.810 | 30.00th=[ 5866], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7111], 00:09:55.810 | 70.00th=[ 7373], 80.00th=[ 7832], 90.00th=[11076], 95.00th=[13042], 00:09:55.810 | 99.00th=[26870], 99.50th=[29754], 99.90th=[30540], 99.95th=[30540], 00:09:55.810 | 99.99th=[30540] 00:09:55.810 bw ( KiB/s): min=32480, max=36648, per=35.21%, avg=34564.00, stdev=2947.22, samples=2 00:09:55.810 iops : min= 8120, max= 9162, avg=8641.00, stdev=736.81, samples=2 00:09:55.810 lat (usec) : 750=0.02%, 1000=0.05% 00:09:55.810 lat (msec) : 2=0.32%, 4=4.71%, 10=83.33%, 20=10.19%, 50=1.39% 00:09:55.810 cpu : usr=5.48%, sys=9.77%, ctx=767, majf=0, minf=2 00:09:55.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:55.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.810 issued rwts: total=8257,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.810 job2: (groupid=0, jobs=1): err= 0: pid=2011187: Fri Dec 6 13:17:42 2024 00:09:55.810 read: IOPS=5090, BW=19.9MiB/s (20.8MB/s)(20.7MiB/1043msec) 00:09:55.810 slat (nsec): min=967, max=8446.5k, avg=89039.90, stdev=563018.49 00:09:55.810 clat (usec): min=3537, max=57839, avg=12085.97, stdev=6968.07 00:09:55.810 lat (usec): min=3558, max=64021, avg=12175.01, stdev=6999.91 00:09:55.810 clat percentiles (usec): 00:09:55.810 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 8848], 00:09:55.810 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:09:55.810 | 
70.00th=[12256], 80.00th=[13435], 90.00th=[14615], 95.00th=[17695], 00:09:55.810 | 99.00th=[57410], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:09:55.810 | 99.99th=[57934] 00:09:55.810 write: IOPS=5399, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1043msec); 0 zone resets 00:09:55.811 slat (nsec): min=1519, max=16776k, avg=87576.42, stdev=541583.54 00:09:55.811 clat (usec): min=1059, max=31095, avg=12104.71, stdev=4498.22 00:09:55.811 lat (usec): min=1071, max=31097, avg=12192.28, stdev=4539.75 00:09:55.811 clat percentiles (usec): 00:09:55.811 | 1.00th=[ 2474], 5.00th=[ 5145], 10.00th=[ 6587], 20.00th=[ 8160], 00:09:55.811 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[11207], 60.00th=[13566], 00:09:55.811 | 70.00th=[15795], 80.00th=[16581], 90.00th=[18220], 95.00th=[18744], 00:09:55.811 | 99.00th=[20579], 99.50th=[22938], 99.90th=[31065], 99.95th=[31065], 00:09:55.811 | 99.99th=[31065] 00:09:55.811 bw ( KiB/s): min=20480, max=24576, per=22.95%, avg=22528.00, stdev=2896.31, samples=2 00:09:55.811 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:55.811 lat (msec) : 2=0.42%, 4=0.75%, 10=34.03%, 20=62.25%, 50=1.97% 00:09:55.811 lat (msec) : 100=0.58% 00:09:55.811 cpu : usr=2.78%, sys=6.62%, ctx=507, majf=0, minf=1 00:09:55.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:55.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.811 issued rwts: total=5309,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.811 job3: (groupid=0, jobs=1): err= 0: pid=2011192: Fri Dec 6 13:17:42 2024 00:09:55.811 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.3MiB/1043msec) 00:09:55.811 slat (nsec): min=953, max=7586.4k, avg=136737.68, stdev=769614.30 00:09:55.811 clat (usec): min=7235, max=54686, avg=17609.60, stdev=6184.74 00:09:55.811 lat (usec): 
min=7241, max=54695, avg=17746.33, stdev=6228.76 00:09:55.811 clat percentiles (usec): 00:09:55.811 | 1.00th=[ 9372], 5.00th=[12780], 10.00th=[14222], 20.00th=[14615], 00:09:55.811 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15926], 60.00th=[16581], 00:09:55.811 | 70.00th=[17695], 80.00th=[19792], 90.00th=[21890], 95.00th=[24511], 00:09:55.811 | 99.00th=[48497], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:09:55.811 | 99.99th=[54789] 00:09:55.811 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:09:55.811 slat (nsec): min=1700, max=21585k, avg=203220.46, stdev=964019.11 00:09:55.811 clat (usec): min=1277, max=79352, avg=27981.63, stdev=15828.70 00:09:55.811 lat (usec): min=1309, max=79360, avg=28184.85, stdev=15924.53 00:09:55.811 clat percentiles (usec): 00:09:55.811 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[15533], 00:09:55.811 | 30.00th=[19006], 40.00th=[23462], 50.00th=[25297], 60.00th=[27395], 00:09:55.811 | 70.00th=[31065], 80.00th=[36439], 90.00th=[53740], 95.00th=[60031], 00:09:55.811 | 99.00th=[74974], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 00:09:55.811 | 99.99th=[79168] 00:09:55.811 bw ( KiB/s): min= 9808, max=14280, per=12.27%, avg=12044.00, stdev=3162.18, samples=2 00:09:55.811 iops : min= 2452, max= 3570, avg=3011.00, stdev=790.55, samples=2 00:09:55.811 lat (msec) : 2=0.04%, 10=6.34%, 20=48.84%, 50=38.22%, 100=6.56% 00:09:55.811 cpu : usr=0.86%, sys=3.93%, ctx=395, majf=0, minf=1 00:09:55.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:55.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.811 issued rwts: total=2626,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.811 00:09:55.811 Run status group 0 (all jobs): 00:09:55.811 READ: bw=90.8MiB/s (95.2MB/s), 
9.83MiB/s-32.1MiB/s (10.3MB/s-33.7MB/s), io=94.7MiB (99.3MB), run=1004-1043msec 00:09:55.811 WRITE: bw=95.9MiB/s (101MB/s), 11.5MiB/s-33.9MiB/s (12.1MB/s-35.5MB/s), io=100MiB (105MB), run=1004-1043msec 00:09:55.811 00:09:55.811 Disk stats (read/write): 00:09:55.811 nvme0n1: ios=6873/7168, merge=0/0, ticks=52052/48333, in_queue=100385, util=95.99% 00:09:55.811 nvme0n2: ios=7203/7473, merge=0/0, ticks=45979/44254, in_queue=90233, util=99.29% 00:09:55.811 nvme0n3: ios=4433/4608, merge=0/0, ticks=32798/41659, in_queue=74457, util=95.68% 00:09:55.811 nvme0n4: ios=2075/2303, merge=0/0, ticks=11941/24402, in_queue=36343, util=96.69% 00:09:55.811 13:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:55.811 13:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2011499 00:09:55.811 13:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:55.811 13:17:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:55.811 [global] 00:09:55.811 thread=1 00:09:55.811 invalidate=1 00:09:55.811 rw=read 00:09:55.811 time_based=1 00:09:55.811 runtime=10 00:09:55.811 ioengine=libaio 00:09:55.811 direct=1 00:09:55.811 bs=4096 00:09:55.811 iodepth=1 00:09:55.811 norandommap=1 00:09:55.811 numjobs=1 00:09:55.811 00:09:55.811 [job0] 00:09:55.811 filename=/dev/nvme0n1 00:09:55.811 [job1] 00:09:55.811 filename=/dev/nvme0n2 00:09:55.811 [job2] 00:09:55.811 filename=/dev/nvme0n3 00:09:55.811 [job3] 00:09:55.811 filename=/dev/nvme0n4 00:09:55.811 Could not set queue depth (nvme0n1) 00:09:55.811 Could not set queue depth (nvme0n2) 00:09:55.811 Could not set queue depth (nvme0n3) 00:09:55.811 Could not set queue depth (nvme0n4) 00:09:56.071 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.071 job1: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.071 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.071 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.071 fio-3.35 00:09:56.071 Starting 4 threads 00:09:58.612 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:58.871 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:09:58.871 fio: pid=2011714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:58.871 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:59.130 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11059200, buflen=4096 00:09:59.130 fio: pid=2011709, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.130 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.130 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:59.130 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2408448, buflen=4096 00:09:59.130 fio: pid=2011699, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.390 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.390 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:59.390 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12349440, buflen=4096 00:09:59.390 fio: pid=2011704, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.390 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.390 13:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:59.390 00:09:59.390 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2011699: Fri Dec 6 13:17:45 2024 00:09:59.390 read: IOPS=200, BW=801KiB/s (820kB/s)(2352KiB/2938msec) 00:09:59.390 slat (usec): min=6, max=7571, avg=41.35, stdev=326.52 00:09:59.390 clat (usec): min=555, max=45726, avg=4946.72, stdev=12089.02 00:09:59.390 lat (usec): min=563, max=45750, avg=4983.97, stdev=12088.68 00:09:59.390 clat percentiles (usec): 00:09:59.390 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 889], 20.00th=[ 955], 00:09:59.390 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1074], 00:09:59.390 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1319], 95.00th=[42206], 00:09:59.390 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:09:59.390 | 99.99th=[45876] 00:09:59.390 bw ( KiB/s): min= 96, max= 2464, per=9.86%, avg=809.60, stdev=957.57, samples=5 00:09:59.390 iops : min= 24, max= 616, avg=202.40, stdev=239.39, samples=5 00:09:59.390 lat (usec) : 750=1.02%, 1000=37.01% 00:09:59.390 lat (msec) : 2=52.29%, 50=9.51% 00:09:59.390 cpu : usr=0.17%, sys=0.61%, ctx=592, majf=0, minf=1 00:09:59.390 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:59.390 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 issued rwts: total=589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.390 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.390 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2011704: Fri Dec 6 13:17:45 2024 00:09:59.390 read: IOPS=971, BW=3885KiB/s (3979kB/s)(11.8MiB/3104msec) 00:09:59.390 slat (usec): min=6, max=21004, avg=51.26, stdev=627.69 00:09:59.390 clat (usec): min=224, max=1580, avg=971.70, stdev=80.50 00:09:59.390 lat (usec): min=250, max=21976, avg=1022.97, stdev=633.11 00:09:59.390 clat percentiles (usec): 00:09:59.390 | 1.00th=[ 742], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 930], 00:09:59.390 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:09:59.390 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:59.390 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1205], 00:09:59.390 | 99.99th=[ 1582] 00:09:59.390 bw ( KiB/s): min= 3471, max= 4024, per=47.55%, avg=3901.17, stdev=211.99, samples=6 00:09:59.390 iops : min= 867, max= 1006, avg=975.17, stdev=53.30, samples=6 00:09:59.390 lat (usec) : 250=0.03%, 500=0.07%, 750=1.36%, 1000=59.22% 00:09:59.390 lat (msec) : 2=39.29% 00:09:59.390 cpu : usr=0.93%, sys=3.00%, ctx=3023, majf=0, minf=2 00:09:59.390 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 issued rwts: total=3016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.390 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.390 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2011709: Fri Dec 6 13:17:45 2024 00:09:59.390 read: IOPS=982, BW=3930KiB/s 
(4024kB/s)(10.5MiB/2748msec) 00:09:59.390 slat (usec): min=6, max=19693, avg=40.11, stdev=476.23 00:09:59.390 clat (usec): min=293, max=2671, avg=971.08, stdev=82.15 00:09:59.390 lat (usec): min=321, max=20645, avg=1011.20, stdev=482.32 00:09:59.390 clat percentiles (usec): 00:09:59.390 | 1.00th=[ 734], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 930], 00:09:59.390 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:09:59.390 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:09:59.390 | 99.00th=[ 1106], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:59.390 | 99.99th=[ 2671] 00:09:59.390 bw ( KiB/s): min= 3952, max= 4032, per=48.64%, avg=3990.40, stdev=31.19, samples=5 00:09:59.390 iops : min= 988, max= 1008, avg=997.60, stdev= 7.80, samples=5 00:09:59.390 lat (usec) : 500=0.04%, 750=1.07%, 1000=63.20% 00:09:59.390 lat (msec) : 2=35.62%, 4=0.04% 00:09:59.390 cpu : usr=1.46%, sys=4.33%, ctx=2703, majf=0, minf=2 00:09:59.390 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 issued rwts: total=2701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.390 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.390 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2011714: Fri Dec 6 13:17:45 2024 00:09:59.390 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(252KiB/2596msec) 00:09:59.390 slat (nsec): min=26176, max=36031, avg=27072.19, stdev=1238.16 00:09:59.390 clat (usec): min=842, max=42695, avg=41154.33, stdev=5176.23 00:09:59.390 lat (usec): min=878, max=42721, avg=41181.40, stdev=5175.09 00:09:59.390 clat percentiles (usec): 00:09:59.390 | 1.00th=[ 840], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:59.390 | 30.00th=[41681], 40.00th=[41681], 
50.00th=[41681], 60.00th=[42206], 00:09:59.390 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:59.390 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:59.390 | 99.99th=[42730] 00:09:59.390 bw ( KiB/s): min= 96, max= 96, per=1.17%, avg=96.00, stdev= 0.00, samples=5 00:09:59.390 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:09:59.390 lat (usec) : 1000=1.56% 00:09:59.390 lat (msec) : 50=96.88% 00:09:59.390 cpu : usr=0.15%, sys=0.00%, ctx=64, majf=0, minf=2 00:09:59.390 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.390 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.390 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.390 00:09:59.390 Run status group 0 (all jobs): 00:09:59.390 READ: bw=8204KiB/s (8400kB/s), 97.1KiB/s-3930KiB/s (99.4kB/s-4024kB/s), io=24.9MiB (26.1MB), run=2596-3104msec 00:09:59.390 00:09:59.390 Disk stats (read/write): 00:09:59.390 nvme0n1: ios=576/0, merge=0/0, ticks=2813/0, in_queue=2813, util=94.52% 00:09:59.390 nvme0n2: ios=2998/0, merge=0/0, ticks=2934/0, in_queue=2934, util=93.40% 00:09:59.390 nvme0n3: ios=2575/0, merge=0/0, ticks=2408/0, in_queue=2408, util=96.03% 00:09:59.390 nvme0n4: ios=62/0, merge=0/0, ticks=2553/0, in_queue=2553, util=96.46% 00:09:59.650 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.650 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:59.909 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:09:59.909 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:59.909 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.909 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:00.169 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.169 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2011499 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:00.428 nvmf hotplug test: fio failed as expected 00:10:00.428 13:17:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-tcp 00:10:00.688 rmmod nvme_tcp 00:10:00.688 rmmod nvme_fabrics 00:10:00.688 rmmod nvme_keyring 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2007985 ']' 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2007985 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2007985 ']' 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2007985 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2007985 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2007985' 00:10:00.688 killing process with pid 2007985 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2007985 00:10:00.688 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2007985 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.949 13:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.859 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.859 00:10:02.859 real 0m29.330s 00:10:02.859 user 2m37.733s 00:10:02.859 sys 0m9.591s 00:10:02.859 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.859 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.859 ************************************ 00:10:02.859 END TEST nvmf_fio_target 00:10:02.859 ************************************ 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.120 ************************************ 00:10:03.120 START TEST nvmf_bdevio 00:10:03.120 ************************************ 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:03.120 * Looking for test storage... 00:10:03.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.120 13:17:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.120 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.121 13:17:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:03.121 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.382 --rc genhtml_branch_coverage=1 00:10:03.382 --rc genhtml_function_coverage=1 00:10:03.382 --rc genhtml_legend=1 00:10:03.382 --rc geninfo_all_blocks=1 00:10:03.382 --rc geninfo_unexecuted_blocks=1 00:10:03.382 00:10:03.382 ' 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.382 --rc genhtml_branch_coverage=1 00:10:03.382 --rc genhtml_function_coverage=1 00:10:03.382 --rc genhtml_legend=1 00:10:03.382 --rc geninfo_all_blocks=1 00:10:03.382 --rc geninfo_unexecuted_blocks=1 00:10:03.382 00:10:03.382 ' 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.382 --rc genhtml_branch_coverage=1 00:10:03.382 --rc genhtml_function_coverage=1 00:10:03.382 --rc genhtml_legend=1 00:10:03.382 --rc geninfo_all_blocks=1 00:10:03.382 --rc geninfo_unexecuted_blocks=1 00:10:03.382 00:10:03.382 ' 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.382 --rc genhtml_branch_coverage=1 00:10:03.382 --rc 
genhtml_function_coverage=1 00:10:03.382 --rc genhtml_legend=1 00:10:03.382 --rc geninfo_all_blocks=1 00:10:03.382 --rc geninfo_unexecuted_blocks=1 00:10:03.382 00:10:03.382 ' 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.382 13:17:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:03.382 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.383 13:17:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:11.522 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.522 13:17:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:11.522 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:11.522 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:11.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.522 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.523 13:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:10:11.523 00:10:11.523 --- 10.0.0.2 ping statistics --- 00:10:11.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.523 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:10:11.523 00:10:11.523 --- 10.0.0.1 ping statistics --- 00:10:11.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.523 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2016986 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2016986 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2016986 ']' 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.523 13:17:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.523 [2024-12-06 13:17:57.370618] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:10:11.523 [2024-12-06 13:17:57.370685] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.523 [2024-12-06 13:17:57.471189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.523 [2024-12-06 13:17:57.524543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.523 [2024-12-06 13:17:57.524595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.523 [2024-12-06 13:17:57.524604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.523 [2024-12-06 13:17:57.524611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.523 [2024-12-06 13:17:57.524618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.523 [2024-12-06 13:17:57.526652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:11.523 [2024-12-06 13:17:57.526929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:11.523 [2024-12-06 13:17:57.527088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:11.523 [2024-12-06 13:17:57.527091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 [2024-12-06 13:17:58.252781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 Malloc0 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.785 [2024-12-06 
13:17:58.329102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.785 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.785 { 00:10:11.785 "params": { 00:10:11.785 "name": "Nvme$subsystem", 00:10:11.785 "trtype": "$TEST_TRANSPORT", 00:10:11.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.785 "adrfam": "ipv4", 00:10:11.785 "trsvcid": "$NVMF_PORT", 00:10:11.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.786 "hdgst": ${hdgst:-false}, 00:10:11.786 "ddgst": ${ddgst:-false} 00:10:11.786 }, 00:10:11.786 "method": "bdev_nvme_attach_controller" 00:10:11.786 } 00:10:11.786 EOF 00:10:11.786 )") 00:10:11.786 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:11.786 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:11.786 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:11.786 13:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.786 "params": { 00:10:11.786 "name": "Nvme1", 00:10:11.786 "trtype": "tcp", 00:10:11.786 "traddr": "10.0.0.2", 00:10:11.786 "adrfam": "ipv4", 00:10:11.786 "trsvcid": "4420", 00:10:11.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.786 "hdgst": false, 00:10:11.786 "ddgst": false 00:10:11.786 }, 00:10:11.786 "method": "bdev_nvme_attach_controller" 00:10:11.786 }' 00:10:11.786 [2024-12-06 13:17:58.385581] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:10:11.786 [2024-12-06 13:17:58.385644] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017079 ] 00:10:12.047 [2024-12-06 13:17:58.481058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.047 [2024-12-06 13:17:58.537501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.047 [2024-12-06 13:17:58.537587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.047 [2024-12-06 13:17:58.537587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.308 I/O targets: 00:10:12.308 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:12.308 00:10:12.308 00:10:12.308 CUnit - A unit testing framework for C - Version 2.1-3 00:10:12.308 http://cunit.sourceforge.net/ 00:10:12.308 00:10:12.308 00:10:12.308 Suite: bdevio tests on: Nvme1n1 00:10:12.308 Test: blockdev write read block ...passed 00:10:12.308 Test: blockdev write zeroes read block ...passed 00:10:12.308 Test: blockdev write zeroes read no split ...passed 00:10:12.308 Test: blockdev write zeroes read split 
...passed 00:10:12.308 Test: blockdev write zeroes read split partial ...passed 00:10:12.308 Test: blockdev reset ...[2024-12-06 13:17:58.951552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:12.308 [2024-12-06 13:17:58.951653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3580 (9): Bad file descriptor 00:10:12.568 [2024-12-06 13:17:59.009308] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:12.568 passed 00:10:12.568 Test: blockdev write read 8 blocks ...passed 00:10:12.568 Test: blockdev write read size > 128k ...passed 00:10:12.568 Test: blockdev write read invalid size ...passed 00:10:12.568 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:12.568 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:12.568 Test: blockdev write read max offset ...passed 00:10:12.568 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:12.568 Test: blockdev writev readv 8 blocks ...passed 00:10:12.568 Test: blockdev writev readv 30 x 1block ...passed 00:10:12.569 Test: blockdev writev readv block ...passed 00:10:12.569 Test: blockdev writev readv size > 128k ...passed 00:10:12.569 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:12.569 Test: blockdev comparev and writev ...[2024-12-06 13:17:59.188647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.188699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.188716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 
13:17:59.188725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.189096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.189112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.189127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.189137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.189537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.189552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.189566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.189583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.189973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.189986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:12.569 [2024-12-06 13:17:59.190000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:12.569 [2024-12-06 13:17:59.190009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:12.829 passed 00:10:12.829 Test: blockdev nvme passthru rw ...passed 00:10:12.829 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:17:59.273915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.829 [2024-12-06 13:17:59.273935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:12.829 [2024-12-06 13:17:59.274150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.829 [2024-12-06 13:17:59.274162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:12.829 [2024-12-06 13:17:59.274379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.830 [2024-12-06 13:17:59.274391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:12.830 [2024-12-06 13:17:59.274610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:12.830 [2024-12-06 13:17:59.274626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:12.830 passed 00:10:12.830 Test: blockdev nvme admin passthru ...passed 00:10:12.830 Test: blockdev copy ...passed 00:10:12.830 00:10:12.830 Run Summary: Type Total Ran Passed Failed Inactive 00:10:12.830 suites 1 1 n/a 0 0 00:10:12.830 tests 23 23 23 0 0 00:10:12.830 asserts 152 152 152 0 n/a 00:10:12.830 00:10:12.830 Elapsed time = 1.015 seconds 
00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.830 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.830 rmmod nvme_tcp 00:10:12.830 rmmod nvme_fabrics 00:10:13.090 rmmod nvme_keyring 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2016986 ']' 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2016986 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2016986 ']' 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2016986 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2016986 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2016986' 00:10:13.090 killing process with pid 2016986 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2016986 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2016986 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.090 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.091 13:17:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.634 00:10:15.634 real 0m12.214s 00:10:15.634 user 0m13.263s 00:10:15.634 sys 0m6.205s 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:15.634 ************************************ 00:10:15.634 END TEST nvmf_bdevio 00:10:15.634 ************************************ 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:15.634 00:10:15.634 real 5m5.676s 00:10:15.634 user 11m51.891s 00:10:15.634 sys 1m51.131s 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.634 ************************************ 00:10:15.634 END TEST nvmf_target_core 00:10:15.634 ************************************ 00:10:15.634 13:18:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:15.634 13:18:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.634 13:18:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.634 13:18:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:15.634 ************************************ 00:10:15.634 START TEST nvmf_target_extra 00:10:15.634 ************************************ 00:10:15.634 13:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:15.634 * Looking for test storage... 00:10:15.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.634 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.635 --rc genhtml_branch_coverage=1 00:10:15.635 --rc genhtml_function_coverage=1 00:10:15.635 --rc genhtml_legend=1 00:10:15.635 --rc geninfo_all_blocks=1 
00:10:15.635 --rc geninfo_unexecuted_blocks=1 00:10:15.635 00:10:15.635 ' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.635 --rc genhtml_branch_coverage=1 00:10:15.635 --rc genhtml_function_coverage=1 00:10:15.635 --rc genhtml_legend=1 00:10:15.635 --rc geninfo_all_blocks=1 00:10:15.635 --rc geninfo_unexecuted_blocks=1 00:10:15.635 00:10:15.635 ' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.635 --rc genhtml_branch_coverage=1 00:10:15.635 --rc genhtml_function_coverage=1 00:10:15.635 --rc genhtml_legend=1 00:10:15.635 --rc geninfo_all_blocks=1 00:10:15.635 --rc geninfo_unexecuted_blocks=1 00:10:15.635 00:10:15.635 ' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.635 --rc genhtml_branch_coverage=1 00:10:15.635 --rc genhtml_function_coverage=1 00:10:15.635 --rc genhtml_legend=1 00:10:15.635 --rc geninfo_all_blocks=1 00:10:15.635 --rc geninfo_unexecuted_blocks=1 00:10:15.635 00:10:15.635 ' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:15.635 ************************************ 00:10:15.635 START TEST nvmf_example 00:10:15.635 ************************************ 00:10:15.635 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:15.635 * Looking for test storage... 00:10:15.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.898 
13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.898 --rc genhtml_branch_coverage=1 00:10:15.898 --rc genhtml_function_coverage=1 00:10:15.898 --rc genhtml_legend=1 00:10:15.898 --rc geninfo_all_blocks=1 00:10:15.898 --rc geninfo_unexecuted_blocks=1 00:10:15.898 00:10:15.898 ' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.898 --rc genhtml_branch_coverage=1 00:10:15.898 --rc genhtml_function_coverage=1 00:10:15.898 --rc genhtml_legend=1 00:10:15.898 --rc geninfo_all_blocks=1 00:10:15.898 --rc geninfo_unexecuted_blocks=1 00:10:15.898 00:10:15.898 ' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.898 --rc genhtml_branch_coverage=1 00:10:15.898 --rc genhtml_function_coverage=1 00:10:15.898 --rc genhtml_legend=1 00:10:15.898 --rc geninfo_all_blocks=1 00:10:15.898 --rc geninfo_unexecuted_blocks=1 00:10:15.898 00:10:15.898 ' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.898 --rc 
genhtml_branch_coverage=1 00:10:15.898 --rc genhtml_function_coverage=1 00:10:15.898 --rc genhtml_legend=1 00:10:15.898 --rc geninfo_all_blocks=1 00:10:15.898 --rc geninfo_unexecuted_blocks=1 00:10:15.898 00:10:15.898 ' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.898 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:15.898 13:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.899 
13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.899 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:24.039 13:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:24.039 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:24.039 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:24.039 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.039 13:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:24.039 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.039 
13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:24.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:10:24.039 00:10:24.039 --- 10.0.0.2 ping statistics --- 00:10:24.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.039 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:10:24.039 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:10:24.039 00:10:24.039 --- 10.0.0.1 ping statistics --- 00:10:24.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.040 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.040 13:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.040 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2022143 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2022143 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2022143 ']' 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:24.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.040 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.302 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.564 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:24.564 
13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:24.564 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:36.793 Initializing NVMe Controllers
00:10:36.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:36.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:36.793 Initialization complete. Launching workers.
00:10:36.793 ========================================================
00:10:36.793 Latency(us)
00:10:36.793 Device Information : IOPS MiB/s Average min max
00:10:36.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18991.32 74.18 3369.58 628.12 19016.00
00:10:36.793 ========================================================
00:10:36.793 Total : 18991.32 74.18 3369.58 628.12 19016.00
00:10:36.793
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:36.793 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2022143 ']'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2022143
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2022143 ']'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2022143
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2022143
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022143' killing process with pid 2022143
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2022143
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2022143
00:10:36.793 nvmf threads initialize successfully
00:10:36.793 bdev subsystem init successfully
00:10:36.793 created a nvmf target service
00:10:36.793 create targets's poll groups done
00:10:36.793 all subsystems of target started
00:10:36.793 nvmf target is running
00:10:36.793 all subsystems of target stopped
00:10:36.793 destroy targets's poll groups done
00:10:36.793 destroyed the nvmf target service
00:10:36.793 bdev subsystem finish successfully
00:10:36.793 nvmf threads destroy successfully
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:36.793 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:37.053 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:37.053 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:37.053 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:37.053 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:37.313
00:10:37.313 real 0m21.558s
00:10:37.313 user 0m46.875s
00:10:37.313 sys 0m7.134s
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:37.313 ************************************
00:10:37.313 END TEST nvmf_example
00:10:37.313 ************************************
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:37.313 ************************************
00:10:37.313 START TEST nvmf_filesystem
00:10:37.313 ************************************
00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:37.313 * Looking for test storage...
00:10:37.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:37.313 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:37.577 
13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:37.577 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:37.577 --rc genhtml_branch_coverage=1 00:10:37.577 --rc genhtml_function_coverage=1 00:10:37.577 --rc genhtml_legend=1 00:10:37.577 --rc geninfo_all_blocks=1 00:10:37.577 --rc geninfo_unexecuted_blocks=1 00:10:37.577 00:10:37.577 ' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.577 --rc genhtml_branch_coverage=1 00:10:37.577 --rc genhtml_function_coverage=1 00:10:37.577 --rc genhtml_legend=1 00:10:37.577 --rc geninfo_all_blocks=1 00:10:37.577 --rc geninfo_unexecuted_blocks=1 00:10:37.577 00:10:37.577 ' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.577 --rc genhtml_branch_coverage=1 00:10:37.577 --rc genhtml_function_coverage=1 00:10:37.577 --rc genhtml_legend=1 00:10:37.577 --rc geninfo_all_blocks=1 00:10:37.577 --rc geninfo_unexecuted_blocks=1 00:10:37.577 00:10:37.577 ' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.577 --rc genhtml_branch_coverage=1 00:10:37.577 --rc genhtml_function_coverage=1 00:10:37.577 --rc genhtml_legend=1 00:10:37.577 --rc geninfo_all_blocks=1 00:10:37.577 --rc geninfo_unexecuted_blocks=1 00:10:37.577 00:10:37.577 ' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:37.577 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:37.577 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:37.577 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:37.577 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:37.578 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:37.578 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:37.578 
13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:10:37.578 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:10:37.578 #define SPDK_CONFIG_H
00:10:37.578 #define SPDK_CONFIG_AIO_FSDEV 1
00:10:37.578 #define SPDK_CONFIG_APPS 1
00:10:37.578 #define SPDK_CONFIG_ARCH native
00:10:37.578 #undef SPDK_CONFIG_ASAN
00:10:37.578 #undef SPDK_CONFIG_AVAHI
00:10:37.578 #undef SPDK_CONFIG_CET
00:10:37.578 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:10:37.578 #define SPDK_CONFIG_COVERAGE 1
00:10:37.578 #define SPDK_CONFIG_CROSS_PREFIX
00:10:37.578 #undef SPDK_CONFIG_CRYPTO
00:10:37.578 #undef SPDK_CONFIG_CRYPTO_MLX5
00:10:37.578 #undef SPDK_CONFIG_CUSTOMOCF
00:10:37.578 #undef SPDK_CONFIG_DAOS
00:10:37.578 #define SPDK_CONFIG_DAOS_DIR
00:10:37.578 #define SPDK_CONFIG_DEBUG 1
00:10:37.578 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:10:37.578 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:10:37.578 #define SPDK_CONFIG_DPDK_INC_DIR
00:10:37.578 #define SPDK_CONFIG_DPDK_LIB_DIR
00:10:37.578 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:10:37.578 #undef SPDK_CONFIG_DPDK_UADK
00:10:37.578 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:10:37.578 #define SPDK_CONFIG_EXAMPLES 1
00:10:37.578 #undef SPDK_CONFIG_FC
00:10:37.578 #define SPDK_CONFIG_FC_PATH
00:10:37.578 #define SPDK_CONFIG_FIO_PLUGIN 1
00:10:37.578 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:10:37.578 #define SPDK_CONFIG_FSDEV 1
00:10:37.578 #undef SPDK_CONFIG_FUSE
00:10:37.578 #undef SPDK_CONFIG_FUZZER
00:10:37.578 #define SPDK_CONFIG_FUZZER_LIB
00:10:37.578 #undef SPDK_CONFIG_GOLANG
00:10:37.578 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:10:37.578 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:10:37.578 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:10:37.578 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:10:37.578 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:10:37.578 #undef SPDK_CONFIG_HAVE_LIBBSD
00:10:37.578 #undef SPDK_CONFIG_HAVE_LZ4
00:10:37.578 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:10:37.578 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:10:37.578 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:10:37.578 #define SPDK_CONFIG_IDXD 1
00:10:37.578 #define SPDK_CONFIG_IDXD_KERNEL 1
00:10:37.578 #undef SPDK_CONFIG_IPSEC_MB
00:10:37.578 #define SPDK_CONFIG_IPSEC_MB_DIR
00:10:37.578 #define SPDK_CONFIG_ISAL 1
00:10:37.578 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:10:37.578 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:10:37.578 #define SPDK_CONFIG_LIBDIR
00:10:37.578 #undef SPDK_CONFIG_LTO
00:10:37.578 #define SPDK_CONFIG_MAX_LCORES 128
00:10:37.578 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:10:37.578 #define SPDK_CONFIG_NVME_CUSE 1
00:10:37.578 #undef SPDK_CONFIG_OCF
00:10:37.578 #define SPDK_CONFIG_OCF_PATH
00:10:37.578 #define SPDK_CONFIG_OPENSSL_PATH
00:10:37.578 #undef SPDK_CONFIG_PGO_CAPTURE
00:10:37.578 #define SPDK_CONFIG_PGO_DIR
00:10:37.578 #undef SPDK_CONFIG_PGO_USE
00:10:37.578 #define SPDK_CONFIG_PREFIX /usr/local
00:10:37.578 #undef SPDK_CONFIG_RAID5F
00:10:37.578 #undef SPDK_CONFIG_RBD
00:10:37.578 #define SPDK_CONFIG_RDMA 1
00:10:37.578 #define SPDK_CONFIG_RDMA_PROV verbs
00:10:37.578 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:10:37.578 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:10:37.578 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:10:37.578 #define SPDK_CONFIG_SHARED 1
00:10:37.578 #undef SPDK_CONFIG_SMA
00:10:37.578 #define SPDK_CONFIG_TESTS 1
00:10:37.578 #undef SPDK_CONFIG_TSAN
00:10:37.578 #define SPDK_CONFIG_UBLK 1
00:10:37.578 #define SPDK_CONFIG_UBSAN 1
00:10:37.578 #undef SPDK_CONFIG_UNIT_TESTS
00:10:37.578 #undef SPDK_CONFIG_URING
00:10:37.578 #define SPDK_CONFIG_URING_PATH
00:10:37.579 #undef SPDK_CONFIG_URING_ZNS
00:10:37.579 #undef SPDK_CONFIG_USDT
00:10:37.579 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:10:37.579
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:37.579 #define SPDK_CONFIG_VFIO_USER 1 00:10:37.579 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:37.579 #define SPDK_CONFIG_VHOST 1 00:10:37.579 #define SPDK_CONFIG_VIRTIO 1 00:10:37.579 #undef SPDK_CONFIG_VTUNE 00:10:37.579 #define SPDK_CONFIG_VTUNE_DIR 00:10:37.579 #define SPDK_CONFIG_WERROR 1 00:10:37.579 #define SPDK_CONFIG_WPDK_DIR 00:10:37.579 #undef SPDK_CONFIG_XNVME 00:10:37.579 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:37.579 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:37.579 
13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:37.579 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:37.580 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:37.580 
13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:37.580 13:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:37.580 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2025161 ]] 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2025161 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.SDC69j
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SDC69j/tests/target /tmp/spdk.SDC69j
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2
00:10:37.581 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122612539392
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356529664
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6743990272
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668233728
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677539840
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=724992
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:10:37.582 * Looking for test storage...
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122612539392
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8958582784
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:37.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:10:37.582 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.844 --rc genhtml_branch_coverage=1
00:10:37.844 --rc genhtml_function_coverage=1
00:10:37.844 --rc genhtml_legend=1
00:10:37.844 --rc geninfo_all_blocks=1
00:10:37.844 --rc geninfo_unexecuted_blocks=1
00:10:37.844 
00:10:37.844 '
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.844 --rc genhtml_branch_coverage=1
00:10:37.844 --rc genhtml_function_coverage=1
00:10:37.844 --rc genhtml_legend=1
00:10:37.844 --rc geninfo_all_blocks=1
00:10:37.844 --rc geninfo_unexecuted_blocks=1
00:10:37.844 
00:10:37.844 '
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.844 --rc genhtml_branch_coverage=1
00:10:37.844 --rc genhtml_function_coverage=1
00:10:37.844 --rc genhtml_legend=1
00:10:37.844 --rc geninfo_all_blocks=1
00:10:37.844 --rc geninfo_unexecuted_blocks=1
00:10:37.844 
00:10:37.844 '
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.844 --rc genhtml_branch_coverage=1
00:10:37.844 --rc genhtml_function_coverage=1
00:10:37.844 --rc genhtml_legend=1
00:10:37.844 --rc geninfo_all_blocks=1
00:10:37.844 --rc geninfo_unexecuted_blocks=1
00:10:37.844 
00:10:37.844 '
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:10:37.844 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:37.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:10:37.845 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=()
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:10:45.992 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:10:45.992 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:10:45.992 Found net devices under 0000:4b:00.0: cvl_0_0
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:10:45.992 Found net devices under 0000:4b:00.1: cvl_0_1
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:45.992 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:45.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:45.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms
00:10:45.993 
00:10:45.993 --- 10.0.0.2 ping statistics ---
00:10:45.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:45.993 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:45.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:45.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms
00:10:45.993 
00:10:45.993 --- 10.0.0.1 ping statistics ---
00:10:45.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:45.993 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:45.993 13:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.993 ************************************ 00:10:45.993 START TEST nvmf_filesystem_no_in_capsule 00:10:45.993 ************************************ 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2028870 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2028870 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2028870 ']' 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.993 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.993 [2024-12-06 13:18:31.904582] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:10:45.993 [2024-12-06 13:18:31.904646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.993 [2024-12-06 13:18:32.002863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.993 [2024-12-06 13:18:32.056738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.993 [2024-12-06 13:18:32.056791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:45.993 [2024-12-06 13:18:32.056799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.993 [2024-12-06 13:18:32.056807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.993 [2024-12-06 13:18:32.056813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.993 [2024-12-06 13:18:32.059199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.993 [2024-12-06 13:18:32.059359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.993 [2024-12-06 13:18:32.059524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.993 [2024-12-06 13:18:32.059524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.254 [2024-12-06 13:18:32.785495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.254 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.254 Malloc1 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.516 [2024-12-06 13:18:32.943668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:46.516 13:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:46.516 { 00:10:46.516 "name": "Malloc1", 00:10:46.516 "aliases": [ 00:10:46.516 "60844d7c-3674-4c96-911e-b45914efb6e9" 00:10:46.516 ], 00:10:46.516 "product_name": "Malloc disk", 00:10:46.516 "block_size": 512, 00:10:46.516 "num_blocks": 1048576, 00:10:46.516 "uuid": "60844d7c-3674-4c96-911e-b45914efb6e9", 00:10:46.516 "assigned_rate_limits": { 00:10:46.516 "rw_ios_per_sec": 0, 00:10:46.516 "rw_mbytes_per_sec": 0, 00:10:46.516 "r_mbytes_per_sec": 0, 00:10:46.516 "w_mbytes_per_sec": 0 00:10:46.516 }, 00:10:46.516 "claimed": true, 00:10:46.516 "claim_type": "exclusive_write", 00:10:46.516 "zoned": false, 00:10:46.516 "supported_io_types": { 00:10:46.516 "read": true, 00:10:46.516 "write": true, 00:10:46.516 "unmap": true, 00:10:46.516 "flush": true, 00:10:46.516 "reset": true, 00:10:46.516 "nvme_admin": false, 00:10:46.516 "nvme_io": false, 00:10:46.516 "nvme_io_md": false, 00:10:46.516 "write_zeroes": true, 00:10:46.516 "zcopy": true, 00:10:46.516 "get_zone_info": false, 00:10:46.516 "zone_management": false, 00:10:46.516 "zone_append": false, 00:10:46.516 "compare": false, 00:10:46.516 "compare_and_write": 
false, 00:10:46.516 "abort": true, 00:10:46.516 "seek_hole": false, 00:10:46.516 "seek_data": false, 00:10:46.516 "copy": true, 00:10:46.516 "nvme_iov_md": false 00:10:46.516 }, 00:10:46.516 "memory_domains": [ 00:10:46.516 { 00:10:46.516 "dma_device_id": "system", 00:10:46.516 "dma_device_type": 1 00:10:46.516 }, 00:10:46.516 { 00:10:46.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.516 "dma_device_type": 2 00:10:46.516 } 00:10:46.516 ], 00:10:46.516 "driver_specific": {} 00:10:46.516 } 00:10:46.516 ]' 00:10:46.516 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:46.516 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:47.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:50.436 13:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:50.436 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:51.005 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:51.945 13:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.945 ************************************ 00:10:51.945 START TEST filesystem_ext4 00:10:51.945 ************************************ 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:51.945 13:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:51.945 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:51.945 mke2fs 1.47.0 (5-Feb-2023) 00:10:51.945 Discarding device blocks: 0/522240 done 00:10:52.205 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:52.205 Filesystem UUID: 22e347e7-3092-4c74-88ec-496bb66348ec 00:10:52.205 Superblock backups stored on blocks: 00:10:52.205 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:52.205 00:10:52.205 Allocating group tables: 0/64 done 00:10:52.205 Writing inode tables: 0/64 done 00:10:54.743 Creating journal (8192 blocks): done 00:10:54.743 Writing superblocks and filesystem accounting information: 0/64 done 00:10:54.743 00:10:54.743 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:54.743 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.339 13:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2028870 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.339 00:11:01.339 real 0m8.278s 00:11:01.339 user 0m0.026s 00:11:01.339 sys 0m0.058s 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:01.339 ************************************ 00:11:01.339 END TEST filesystem_ext4 00:11:01.339 ************************************ 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:01.339 
13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.339 ************************************ 00:11:01.339 START TEST filesystem_btrfs 00:11:01.339 ************************************ 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:01.339 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:01.339 13:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:01.340 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:01.340 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:01.340 btrfs-progs v6.8.1 00:11:01.340 See https://btrfs.readthedocs.io for more information. 00:11:01.340 00:11:01.340 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:01.340 NOTE: several default settings have changed in version 5.15, please make sure 00:11:01.340 this does not affect your deployments: 00:11:01.340 - DUP for metadata (-m dup) 00:11:01.340 - enabled no-holes (-O no-holes) 00:11:01.340 - enabled free-space-tree (-R free-space-tree) 00:11:01.340 00:11:01.340 Label: (null) 00:11:01.340 UUID: 42a14fe6-7c14-4180-9743-a1da344585a2 00:11:01.340 Node size: 16384 00:11:01.340 Sector size: 4096 (CPU page size: 4096) 00:11:01.340 Filesystem size: 510.00MiB 00:11:01.340 Block group profiles: 00:11:01.340 Data: single 8.00MiB 00:11:01.340 Metadata: DUP 32.00MiB 00:11:01.340 System: DUP 8.00MiB 00:11:01.340 SSD detected: yes 00:11:01.340 Zoned device: no 00:11:01.340 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:01.340 Checksum: crc32c 00:11:01.340 Number of devices: 1 00:11:01.340 Devices: 00:11:01.340 ID SIZE PATH 00:11:01.340 1 510.00MiB /dev/nvme0n1p1 00:11:01.340 00:11:01.340 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:01.340 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.601 13:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.601 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:01.601 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.601 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:01.601 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:01.601 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2028870 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.862 00:11:01.862 real 0m1.411s 00:11:01.862 user 0m0.022s 00:11:01.862 sys 0m0.069s 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.862 
13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 ************************************ 00:11:01.862 END TEST filesystem_btrfs 00:11:01.862 ************************************ 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 ************************************ 00:11:01.862 START TEST filesystem_xfs 00:11:01.862 ************************************ 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:01.862 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:02.435 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:02.435 = sectsz=512 attr=2, projid32bit=1 00:11:02.435 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:02.435 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:02.435 data = bsize=4096 blocks=130560, imaxpct=25 00:11:02.435 = sunit=0 swidth=0 blks 00:11:02.435 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:02.435 log =internal log bsize=4096 blocks=16384, version=2 00:11:02.435 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:02.435 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:03.377 Discarding blocks...Done. 
00:11:03.377 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:03.377 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2028870 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.923 13:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.923 00:11:05.923 real 0m3.992s 00:11:05.923 user 0m0.025s 00:11:05.923 sys 0m0.058s 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:05.923 ************************************ 00:11:05.923 END TEST filesystem_xfs 00:11:05.923 ************************************ 00:11:05.923 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2028870 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2028870 ']' 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2028870 00:11:06.184 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:06.185 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.185 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2028870 00:11:06.445 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.445 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.445 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2028870' 00:11:06.445 killing process with pid 2028870 00:11:06.445 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2028870 00:11:06.445 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2028870 00:11:06.445 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:06.445 00:11:06.445 real 0m21.205s 00:11:06.445 user 1m23.799s 00:11:06.445 sys 0m1.360s 00:11:06.445 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.445 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.445 ************************************ 00:11:06.445 END TEST nvmf_filesystem_no_in_capsule 00:11:06.445 ************************************ 00:11:06.445 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:06.445 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.445 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.445 13:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:06.706 ************************************ 00:11:06.706 START TEST nvmf_filesystem_in_capsule 00:11:06.706 ************************************ 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2033371 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2033371 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2033371 ']' 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.706 13:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.706 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.706 [2024-12-06 13:18:53.193132] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:11:06.706 [2024-12-06 13:18:53.193207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.706 [2024-12-06 13:18:53.286894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.706 [2024-12-06 13:18:53.321804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.706 [2024-12-06 13:18:53.321834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.706 [2024-12-06 13:18:53.321840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.706 [2024-12-06 13:18:53.321844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.706 [2024-12-06 13:18:53.321849] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:06.706 [2024-12-06 13:18:53.323148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.706 [2024-12-06 13:18:53.323299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.706 [2024-12-06 13:18:53.323449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.706 [2024-12-06 13:18:53.323451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.649 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.649 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:07.649 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.649 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.649 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.649 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.649 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:07.649 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:07.649 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.649 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.650 [2024-12-06 13:18:54.043368] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.650 Malloc1 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.650 13:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.650 [2024-12-06 13:18:54.175239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.650 13:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:07.650 { 00:11:07.650 "name": "Malloc1", 00:11:07.650 "aliases": [ 00:11:07.650 "d9533180-e66e-4741-8ccc-5f159382553b" 00:11:07.650 ], 00:11:07.650 "product_name": "Malloc disk", 00:11:07.650 "block_size": 512, 00:11:07.650 "num_blocks": 1048576, 00:11:07.650 "uuid": "d9533180-e66e-4741-8ccc-5f159382553b", 00:11:07.650 "assigned_rate_limits": { 00:11:07.650 "rw_ios_per_sec": 0, 00:11:07.650 "rw_mbytes_per_sec": 0, 00:11:07.650 "r_mbytes_per_sec": 0, 00:11:07.650 "w_mbytes_per_sec": 0 00:11:07.650 }, 00:11:07.650 "claimed": true, 00:11:07.650 "claim_type": "exclusive_write", 00:11:07.650 "zoned": false, 00:11:07.650 "supported_io_types": { 00:11:07.650 "read": true, 00:11:07.650 "write": true, 00:11:07.650 "unmap": true, 00:11:07.650 "flush": true, 00:11:07.650 "reset": true, 00:11:07.650 "nvme_admin": false, 00:11:07.650 "nvme_io": false, 00:11:07.650 "nvme_io_md": false, 00:11:07.650 "write_zeroes": true, 00:11:07.650 "zcopy": true, 00:11:07.650 "get_zone_info": false, 00:11:07.650 "zone_management": false, 00:11:07.650 "zone_append": false, 00:11:07.650 "compare": false, 00:11:07.650 "compare_and_write": false, 00:11:07.650 "abort": true, 00:11:07.650 "seek_hole": false, 00:11:07.650 "seek_data": false, 00:11:07.650 "copy": true, 00:11:07.650 "nvme_iov_md": false 00:11:07.650 }, 00:11:07.650 "memory_domains": [ 00:11:07.650 { 00:11:07.650 "dma_device_id": "system", 00:11:07.650 "dma_device_type": 1 00:11:07.650 }, 00:11:07.650 { 00:11:07.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.650 "dma_device_type": 2 00:11:07.650 } 00:11:07.650 ], 00:11:07.650 
"driver_specific": {} 00:11:07.650 } 00:11:07.650 ]' 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:07.650 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.576 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.576 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:09.576 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.576 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:09.576 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:11.494 13:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:11.494 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:11.756 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:12.017 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:12.961 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:12.961 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:12.961 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:12.961 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.961 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.223 ************************************ 00:11:13.223 START TEST filesystem_in_capsule_ext4 00:11:13.223 ************************************ 00:11:13.223 13:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:13.223 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:13.223 mke2fs 1.47.0 (5-Feb-2023) 00:11:13.223 Discarding device blocks: 
0/522240 done 00:11:13.223 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:13.223 Filesystem UUID: df621791-6bd7-4785-a886-4cd1023f6ecc 00:11:13.223 Superblock backups stored on blocks: 00:11:13.223 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:13.223 00:11:13.223 Allocating group tables: 0/64 done 00:11:13.223 Writing inode tables: 0/64 done 00:11:16.522 Creating journal (8192 blocks): done 00:11:18.424 Writing superblocks and filesystem accounting information: 0/64 done 00:11:18.424 00:11:18.424 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:18.424 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:23.706 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2033371 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:23.969 00:11:23.969 real 0m10.809s 00:11:23.969 user 0m0.033s 00:11:23.969 sys 0m0.054s 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:23.969 ************************************ 00:11:23.969 END TEST filesystem_in_capsule_ext4 00:11:23.969 ************************************ 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.969 ************************************ 00:11:23.969 START 
TEST filesystem_in_capsule_btrfs 00:11:23.969 ************************************ 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:23.969 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:24.254 btrfs-progs v6.8.1 00:11:24.254 See https://btrfs.readthedocs.io for more information. 00:11:24.254 00:11:24.254 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:24.254 NOTE: several default settings have changed in version 5.15, please make sure 00:11:24.254 this does not affect your deployments: 00:11:24.254 - DUP for metadata (-m dup) 00:11:24.254 - enabled no-holes (-O no-holes) 00:11:24.254 - enabled free-space-tree (-R free-space-tree) 00:11:24.254 00:11:24.254 Label: (null) 00:11:24.254 UUID: e8d2cd18-fed8-475d-89f9-f5e22c451836 00:11:24.254 Node size: 16384 00:11:24.254 Sector size: 4096 (CPU page size: 4096) 00:11:24.254 Filesystem size: 510.00MiB 00:11:24.254 Block group profiles: 00:11:24.254 Data: single 8.00MiB 00:11:24.254 Metadata: DUP 32.00MiB 00:11:24.254 System: DUP 8.00MiB 00:11:24.254 SSD detected: yes 00:11:24.254 Zoned device: no 00:11:24.254 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:24.254 Checksum: crc32c 00:11:24.254 Number of devices: 1 00:11:24.254 Devices: 00:11:24.254 ID SIZE PATH 00:11:24.254 1 510.00MiB /dev/nvme0n1p1 00:11:24.254 00:11:24.254 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:24.254 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2033371 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:24.516 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:24.516 00:11:24.516 real 0m0.475s 00:11:24.516 user 0m0.024s 00:11:24.516 sys 0m0.065s 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:24.516 ************************************ 00:11:24.516 END TEST filesystem_in_capsule_btrfs 00:11:24.516 ************************************ 00:11:24.516 13:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.516 ************************************ 00:11:24.516 START TEST filesystem_in_capsule_xfs 00:11:24.516 ************************************ 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:24.516 
13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:24.516 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:24.516 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:24.516 = sectsz=512 attr=2, projid32bit=1 00:11:24.516 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:24.516 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:24.516 data = bsize=4096 blocks=130560, imaxpct=25 00:11:24.516 = sunit=0 swidth=0 blks 00:11:24.516 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:24.516 log =internal log bsize=4096 blocks=16384, version=2 00:11:24.516 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:24.516 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:25.458 Discarding blocks...Done. 
00:11:25.458 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:25.458 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.370 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.631 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:27.631 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.631 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:27.631 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:27.631 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.631 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2033371 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.632 00:11:27.632 real 0m3.014s 00:11:27.632 user 0m0.026s 00:11:27.632 sys 0m0.055s 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.632 ************************************ 00:11:27.632 END TEST filesystem_in_capsule_xfs 00:11:27.632 ************************************ 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:27.632 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.893 13:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2033371 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2033371 ']' 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2033371 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.893 13:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2033371 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.893 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2033371' 00:11:27.893 killing process with pid 2033371 00:11:27.894 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2033371 00:11:27.894 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2033371 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:28.156 00:11:28.156 real 0m21.447s 00:11:28.156 user 1m24.897s 00:11:28.156 sys 0m1.248s 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.156 ************************************ 00:11:28.156 END TEST nvmf_filesystem_in_capsule 00:11:28.156 ************************************ 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.156 rmmod nvme_tcp 00:11:28.156 rmmod nvme_fabrics 00:11:28.156 rmmod nvme_keyring 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.156 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.267 00:11:30.267 real 0m52.929s 00:11:30.267 user 2m51.052s 00:11:30.267 sys 0m8.500s 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 ************************************ 00:11:30.267 END TEST nvmf_filesystem 00:11:30.267 ************************************ 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.267 ************************************ 00:11:30.267 START TEST nvmf_target_discovery 00:11:30.267 ************************************ 00:11:30.267 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:30.543 * Looking for test storage... 
00:11:30.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.543 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:30.543 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:30.543 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:30.543 
13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.543 --rc genhtml_branch_coverage=1 00:11:30.543 --rc genhtml_function_coverage=1 00:11:30.543 --rc genhtml_legend=1 00:11:30.543 --rc geninfo_all_blocks=1 00:11:30.543 --rc geninfo_unexecuted_blocks=1 00:11:30.543 00:11:30.543 ' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.543 --rc genhtml_branch_coverage=1 00:11:30.543 --rc genhtml_function_coverage=1 00:11:30.543 --rc genhtml_legend=1 00:11:30.543 --rc geninfo_all_blocks=1 00:11:30.543 --rc geninfo_unexecuted_blocks=1 00:11:30.543 00:11:30.543 ' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.543 --rc genhtml_branch_coverage=1 00:11:30.543 --rc genhtml_function_coverage=1 00:11:30.543 --rc genhtml_legend=1 00:11:30.543 --rc geninfo_all_blocks=1 00:11:30.543 --rc geninfo_unexecuted_blocks=1 00:11:30.543 00:11:30.543 ' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:30.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.543 --rc genhtml_branch_coverage=1 00:11:30.543 --rc genhtml_function_coverage=1 00:11:30.543 --rc genhtml_legend=1 00:11:30.543 --rc geninfo_all_blocks=1 00:11:30.543 --rc geninfo_unexecuted_blocks=1 00:11:30.543 00:11:30.543 ' 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.543 13:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.543 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.544 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.690 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.690 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:38.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:38.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.690 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.690 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:38.691 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.691 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:38.691 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:11:38.691 00:11:38.691 --- 10.0.0.2 ping statistics --- 00:11:38.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.691 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:11:38.691 00:11:38.691 --- 10.0.0.1 ping statistics --- 00:11:38.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.691 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2041976 00:11:38.691 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2041976 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2041976 ']' 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.691 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 [2024-12-06 13:19:24.694188] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:11:38.691 [2024-12-06 13:19:24.694256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.691 [2024-12-06 13:19:24.794414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.691 [2024-12-06 13:19:24.846989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:38.691 [2024-12-06 13:19:24.847040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.691 [2024-12-06 13:19:24.847049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.691 [2024-12-06 13:19:24.847057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.691 [2024-12-06 13:19:24.847063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.691 [2024-12-06 13:19:24.849478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.691 [2024-12-06 13:19:24.849624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.691 [2024-12-06 13:19:24.849857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.691 [2024-12-06 13:19:24.849859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.952 [2024-12-06 13:19:25.571409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:38.952 Null1 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.952 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.213 
13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 [2024-12-06 13:19:25.639668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 Null2 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:39.213 
13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.213 Null3
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.213 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 Null4
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.214 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:11:39.474
00:11:39.474 Discovery Log Number of Records 6, Generation counter 6
00:11:39.474 =====Discovery Log Entry 0======
00:11:39.474 trtype: tcp
00:11:39.474 adrfam: ipv4
00:11:39.474 subtype: current discovery subsystem
00:11:39.474 treq: not required
00:11:39.474 portid: 0
00:11:39.474 trsvcid: 4420
00:11:39.474 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:39.474 traddr: 10.0.0.2
00:11:39.474 eflags: explicit discovery connections, duplicate discovery information
00:11:39.474 sectype: none
00:11:39.474 =====Discovery Log Entry 1======
00:11:39.474 trtype: tcp
00:11:39.474 adrfam: ipv4
00:11:39.474 subtype: nvme subsystem
00:11:39.474 treq: not required
00:11:39.474 portid: 0
00:11:39.474 trsvcid: 4420
00:11:39.474 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:39.474 traddr: 10.0.0.2
00:11:39.474 eflags: none
00:11:39.474 sectype: none
00:11:39.474 =====Discovery Log Entry 2======
00:11:39.474 trtype: tcp
00:11:39.474 adrfam: ipv4
00:11:39.474 subtype: nvme subsystem
00:11:39.474 treq: not required
00:11:39.474 portid: 0
00:11:39.474 trsvcid: 4420
00:11:39.474 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:39.474 traddr: 10.0.0.2
00:11:39.474 eflags: none
00:11:39.474 sectype: none
00:11:39.474 =====Discovery Log Entry 3======
00:11:39.474 trtype: tcp
00:11:39.474 adrfam: ipv4
00:11:39.474 subtype: nvme subsystem
00:11:39.474 treq: not required
00:11:39.474 portid: 0
00:11:39.474 trsvcid: 4420
00:11:39.474 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:39.474 traddr: 10.0.0.2
00:11:39.474 eflags: none
00:11:39.474 sectype: none
00:11:39.474 =====Discovery Log Entry 4======
00:11:39.474 trtype: tcp
00:11:39.474 adrfam: ipv4
00:11:39.474 subtype: nvme subsystem
00:11:39.474 treq: not required
00:11:39.474 portid: 0
00:11:39.474 trsvcid: 4420
00:11:39.474 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:39.474 traddr: 10.0.0.2
00:11:39.474 eflags: none
00:11:39.474 sectype: none
00:11:39.474 =====Discovery Log Entry 5======
00:11:39.474 trtype: tcp
00:11:39.474 adrfam: ipv4
00:11:39.474 subtype: discovery subsystem referral
00:11:39.474 treq: not required
00:11:39.474 portid: 0
00:11:39.474 trsvcid: 4430
00:11:39.474 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:39.474 traddr: 10.0.0.2
00:11:39.474 eflags: none
00:11:39.474 sectype: none
00:11:39.474 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:39.474 Perform nvmf subsystem discovery via RPC
00:11:39.474 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:11:39.474 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.474 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.474 [
00:11:39.474 {
00:11:39.474 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:39.474 "subtype": "Discovery",
00:11:39.474 "listen_addresses": [
00:11:39.474 {
00:11:39.474 "trtype": "TCP",
00:11:39.474 "adrfam": "IPv4",
00:11:39.474 "traddr": "10.0.0.2",
00:11:39.474 "trsvcid": "4420"
00:11:39.474 }
00:11:39.474 ],
00:11:39.474 "allow_any_host": true,
00:11:39.474 "hosts": []
00:11:39.474 },
00:11:39.474 {
00:11:39.474 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:39.474 "subtype": "NVMe",
00:11:39.474 "listen_addresses": [
00:11:39.474 {
00:11:39.474 "trtype": "TCP",
00:11:39.474 "adrfam": "IPv4",
00:11:39.474 "traddr": "10.0.0.2",
00:11:39.474 "trsvcid": "4420"
00:11:39.474 }
00:11:39.474 ],
00:11:39.474 "allow_any_host": true,
00:11:39.474 "hosts": [],
00:11:39.474 "serial_number": "SPDK00000000000001",
00:11:39.474 "model_number": "SPDK bdev Controller",
00:11:39.474 "max_namespaces": 32,
00:11:39.474 "min_cntlid": 1,
00:11:39.474 "max_cntlid": 65519,
00:11:39.474 "namespaces": [
00:11:39.474 {
00:11:39.474 "nsid": 1,
00:11:39.474 "bdev_name": "Null1",
00:11:39.474 "name": "Null1",
00:11:39.474 "nguid": "73A27A85496F4BFB830DD240F9E6AB32",
00:11:39.474 "uuid": "73a27a85-496f-4bfb-830d-d240f9e6ab32"
00:11:39.474 }
00:11:39.474 ]
00:11:39.474 },
00:11:39.474 {
00:11:39.474 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:11:39.474 "subtype": "NVMe",
00:11:39.474 "listen_addresses": [
00:11:39.474 {
00:11:39.474 "trtype": "TCP",
00:11:39.474 "adrfam": "IPv4",
00:11:39.474 "traddr": "10.0.0.2",
00:11:39.474 "trsvcid": "4420"
00:11:39.474 }
00:11:39.474 ],
00:11:39.474 "allow_any_host": true,
00:11:39.474 "hosts": [],
00:11:39.474 "serial_number": "SPDK00000000000002",
00:11:39.474 "model_number": "SPDK bdev Controller",
00:11:39.474 "max_namespaces": 32,
00:11:39.474 "min_cntlid": 1,
00:11:39.474 "max_cntlid": 65519,
00:11:39.474 "namespaces": [
00:11:39.474 {
00:11:39.474 "nsid": 1,
00:11:39.474 "bdev_name": "Null2",
00:11:39.474 "name": "Null2",
00:11:39.474 "nguid": "7A61DD3744B84904B9F84042C0E05628",
00:11:39.474 "uuid": "7a61dd37-44b8-4904-b9f8-4042c0e05628"
00:11:39.474 }
00:11:39.474 ]
00:11:39.474 },
00:11:39.474 {
00:11:39.474 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:11:39.474 "subtype": "NVMe",
00:11:39.474 "listen_addresses": [
00:11:39.474 {
00:11:39.474 "trtype": "TCP",
00:11:39.474 "adrfam": "IPv4",
00:11:39.474 "traddr": "10.0.0.2",
00:11:39.474 "trsvcid": "4420"
00:11:39.474 }
00:11:39.474 ],
00:11:39.474 "allow_any_host": true,
00:11:39.474 "hosts": [],
00:11:39.474 "serial_number": "SPDK00000000000003",
00:11:39.474 "model_number": "SPDK bdev Controller",
00:11:39.474 "max_namespaces": 32,
00:11:39.474 "min_cntlid": 1,
00:11:39.474 "max_cntlid": 65519,
00:11:39.474 "namespaces": [
00:11:39.474 {
00:11:39.474 "nsid": 1,
00:11:39.474 "bdev_name": "Null3",
00:11:39.474 "name": "Null3",
00:11:39.474 "nguid": "635237DCF0D04EEC9D29185FF90B9958",
00:11:39.474 "uuid": "635237dc-f0d0-4eec-9d29-185ff90b9958"
00:11:39.474 }
00:11:39.474 ]
00:11:39.474 },
00:11:39.474 {
00:11:39.474 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:11:39.474 "subtype": "NVMe",
00:11:39.474 "listen_addresses": [
00:11:39.474 {
00:11:39.474 "trtype": "TCP",
00:11:39.474 "adrfam": "IPv4",
00:11:39.474 "traddr": "10.0.0.2",
00:11:39.474 "trsvcid": "4420"
00:11:39.474 }
00:11:39.474 ],
00:11:39.474 "allow_any_host": true,
00:11:39.474 "hosts": [],
00:11:39.474 "serial_number": "SPDK00000000000004",
00:11:39.475 "model_number": "SPDK bdev Controller",
00:11:39.475 "max_namespaces": 32,
00:11:39.475 "min_cntlid": 1,
00:11:39.475 "max_cntlid": 65519,
00:11:39.475 "namespaces": [
00:11:39.475 {
00:11:39.475 "nsid": 1,
00:11:39.475 "bdev_name": "Null4",
00:11:39.475 "name": "Null4",
00:11:39.475 "nguid": "FD9F9843F38A404380D3F3B475F05ADE",
00:11:39.475 "uuid": "fd9f9843-f38a-4043-80d3-f3b475f05ade"
00:11:39.475 }
00:11:39.475 ]
00:11:39.475 }
00:11:39.475 ]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.475 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:39.735 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2041976 ']'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2041976
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2041976 ']'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2041976
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2041976
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:39.735 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2041976'
killing process with pid 2041976
13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2041976
13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2041976
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:39.996 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:41.932 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:42.191
00:11:42.191 real 0m11.745s
00:11:42.191 user 0m9.020s
00:11:42.191 sys 0m6.115s
13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:11:42.191 ************************************
00:11:42.191 END TEST nvmf_target_discovery
00:11:42.191 ************************************
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:42.191 ************************************
00:11:42.191 START TEST nvmf_referrals
00:11:42.191 ************************************
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:11:42.191 * Looking for test storage...
00:11:42.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:11:42.191 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:42.510 --rc genhtml_branch_coverage=1
00:11:42.510 --rc genhtml_function_coverage=1
00:11:42.510 --rc genhtml_legend=1
00:11:42.510 --rc geninfo_all_blocks=1
00:11:42.510 --rc geninfo_unexecuted_blocks=1
00:11:42.510
00:11:42.510 '
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:42.510 --rc genhtml_branch_coverage=1
00:11:42.510 --rc genhtml_function_coverage=1
00:11:42.510 --rc genhtml_legend=1
00:11:42.510 --rc geninfo_all_blocks=1
00:11:42.510 --rc geninfo_unexecuted_blocks=1
00:11:42.510
00:11:42.510 '
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:42.510 --rc genhtml_branch_coverage=1
00:11:42.510 --rc genhtml_function_coverage=1
00:11:42.510 --rc genhtml_legend=1
00:11:42.510 --rc geninfo_all_blocks=1
00:11:42.510 --rc geninfo_unexecuted_blocks=1
00:11:42.510
00:11:42.510 '
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:42.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:42.510 --rc genhtml_branch_coverage=1
00:11:42.510 --rc genhtml_function_coverage=1
00:11:42.510 --rc genhtml_legend=1
00:11:42.510 --rc geninfo_all_blocks=1
00:11:42.510 --rc geninfo_unexecuted_blocks=1
00:11:42.510
00:11:42.510 '
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:42.510 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:11:42.511 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals --
nvmf/common.sh@320 -- # e810=() 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:50.652 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:50.652 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:50.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:50.652 13:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:50.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.652 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:11:50.653 00:11:50.653 --- 10.0.0.2 ping statistics --- 00:11:50.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.653 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:50.653 00:11:50.653 --- 10.0.0.1 ping statistics --- 00:11:50.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.653 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2046361 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2046361 00:11:50.653 
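The network plumbing traced above (`nvmf_tcp_init`) can be condensed into the sketch below. This is a paraphrase of the commands visible in the trace, not the canonical `test/nvmf/common.sh`; the interface names (`cvl_0_0`/`cvl_0_1`) and addresses are the ones from this run, and the commands require root on a machine with the same two E810 ports.

```shell
# Sketch of the TCP test topology set up by nvmf_tcp_init (from the trace above).
ip netns add cvl_0_0_ns_spdk                      # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity checks: host -> target netns and back, as in the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The cross-namespace pings succeeding (0% packet loss in the trace) is what lets `nvmf_tcp_init` return 0 and the test proceed.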
13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2046361 ']' 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.653 13:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.653 [2024-12-06 13:19:36.491413] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:11:50.653 [2024-12-06 13:19:36.491492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.653 [2024-12-06 13:19:36.590038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.653 [2024-12-06 13:19:36.642920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.653 [2024-12-06 13:19:36.642972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.653 [2024-12-06 13:19:36.642980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.653 [2024-12-06 13:19:36.642988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.653 [2024-12-06 13:19:36.642994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.653 [2024-12-06 13:19:36.645241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.653 [2024-12-06 13:19:36.645403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.653 [2024-12-06 13:19:36.645566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.653 [2024-12-06 13:19:36.645567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 [2024-12-06 13:19:37.375819] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 [2024-12-06 13:19:37.402707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:50.914 13:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
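Once the target is up, the referral setup traced above boils down to a handful of RPCs. The sketch below reconstructs them from the trace; the `rpc.py` path is an assumption (the trace shows only the wrapped `rpc_cmd` helper), while the transport options, listener, and 127.0.0.x/4430 referral endpoints are taken verbatim from this run.

```shell
# Reconstruction of the referral setup from the trace (target runs in its netns).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
# three plain discovery referrals on the referral port 4430
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test expects 3
```

The `(( 3 == 3 ))` check in the trace is exactly this length comparison.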
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:50.914 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.175 13:19:37 
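Both verification paths (`rpc` and `nvme`) in `get_referral_ips` end the same way: extract the `traddr` values, `sort` them, flatten the result to one line, and string-compare against the expected set. A minimal, self-contained sketch of that comparison, with a hard-coded address list standing in for the RPC/discover output:

```shell
# Stand-in for `nvmf_discovery_get_referrals | jq -r '.[].address.traddr'`:
# three referral addresses, deliberately out of order.
expected="127.0.0.2 127.0.0.3 127.0.0.4"
got=$(printf '%s\n' 127.0.0.4 127.0.0.2 127.0.0.3 | sort)
got=$(echo $got)   # unquoted expansion collapses newlines to single spaces
[[ $got == $expected ]] && echo MATCH   # prints "MATCH"
```

The unquoted `echo $got` is the same word-splitting trick the test script relies on to turn `sort`'s multi-line output into the space-separated form used in the `[[ ... == ... ]]` pattern match above.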
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.175 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.436 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.436 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.697 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.958 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.219 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:52.481 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:52.481 13:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:52.481 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:52.481 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:52.482 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.482 13:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.482 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.743 rmmod nvme_tcp 00:11:52.743 rmmod nvme_fabrics 00:11:52.743 rmmod nvme_keyring 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2046361 ']' 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2046361 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2046361 ']' 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2046361 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.743 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046361 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046361' 00:11:53.003 killing process with pid 2046361 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2046361 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2046361 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.003 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.004 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.004 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.004 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.004 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.552 00:11:55.552 real 0m12.951s 00:11:55.552 user 0m14.615s 00:11:55.552 sys 0m6.391s 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.552 
************************************ 00:11:55.552 END TEST nvmf_referrals 00:11:55.552 ************************************ 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.552 ************************************ 00:11:55.552 START TEST nvmf_connect_disconnect 00:11:55.552 ************************************ 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:55.552 * Looking for test storage... 
00:11:55.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.552 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.553 --rc genhtml_branch_coverage=1 00:11:55.553 --rc genhtml_function_coverage=1 00:11:55.553 --rc genhtml_legend=1 00:11:55.553 --rc geninfo_all_blocks=1 00:11:55.553 --rc geninfo_unexecuted_blocks=1 00:11:55.553 00:11:55.553 ' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.553 --rc genhtml_branch_coverage=1 00:11:55.553 --rc genhtml_function_coverage=1 00:11:55.553 --rc genhtml_legend=1 00:11:55.553 --rc geninfo_all_blocks=1 00:11:55.553 --rc geninfo_unexecuted_blocks=1 00:11:55.553 00:11:55.553 ' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.553 --rc genhtml_branch_coverage=1 00:11:55.553 --rc genhtml_function_coverage=1 00:11:55.553 --rc genhtml_legend=1 00:11:55.553 --rc geninfo_all_blocks=1 00:11:55.553 --rc geninfo_unexecuted_blocks=1 00:11:55.553 00:11:55.553 ' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.553 --rc genhtml_branch_coverage=1 00:11:55.553 --rc genhtml_function_coverage=1 00:11:55.553 --rc genhtml_legend=1 00:11:55.553 --rc geninfo_all_blocks=1 00:11:55.553 --rc geninfo_unexecuted_blocks=1 00:11:55.553 00:11:55.553 ' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.553 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.554 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.698 13:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.698 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.699 13:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:03.699 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:03.699 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.699 13:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:03.699 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.699 13:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:03.699 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.699 13:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.699 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:12:03.700 00:12:03.700 --- 10.0.0.2 ping statistics --- 00:12:03.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.700 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:03.700 00:12:03.700 --- 10.0.0.1 ping statistics --- 00:12:03.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.700 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2051373 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2051373 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2051373 ']' 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.700 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.700 [2024-12-06 13:19:49.502564] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:03.700 [2024-12-06 13:19:49.502630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.700 [2024-12-06 13:19:49.604466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.700 [2024-12-06 13:19:49.658005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:03.700 [2024-12-06 13:19:49.658062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.700 [2024-12-06 13:19:49.658071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.700 [2024-12-06 13:19:49.658079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.700 [2024-12-06 13:19:49.658085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.700 [2024-12-06 13:19:49.660151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.700 [2024-12-06 13:19:49.660316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.700 [2024-12-06 13:19:49.660501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.700 [2024-12-06 13:19:49.660556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.700 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.700 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:03.700 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.700 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.700 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:03.961 13:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.961 [2024-12-06 13:19:50.386413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.961 13:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.961 [2024-12-06 13:19:50.464906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:03.961 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:08.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:22.388 13:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.388 rmmod nvme_tcp 00:12:22.388 rmmod nvme_fabrics 00:12:22.388 rmmod nvme_keyring 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2051373 ']' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2051373 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2051373 ']' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2051373 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051373 
00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051373' 00:12:22.388 killing process with pid 2051373 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2051373 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2051373 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.388 13:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.388 13:20:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.306 00:12:24.306 real 0m29.203s 00:12:24.306 user 1m18.519s 00:12:24.306 sys 0m6.911s 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.306 ************************************ 00:12:24.306 END TEST nvmf_connect_disconnect 00:12:24.306 ************************************ 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.306 13:20:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.566 ************************************ 00:12:24.566 START TEST nvmf_multitarget 00:12:24.566 ************************************ 00:12:24.566 13:20:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.566 * Looking for test storage... 
00:12:24.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.566 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.567 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.567 --rc genhtml_branch_coverage=1 00:12:24.567 --rc genhtml_function_coverage=1 00:12:24.567 --rc genhtml_legend=1 00:12:24.567 --rc geninfo_all_blocks=1 00:12:24.567 --rc geninfo_unexecuted_blocks=1 00:12:24.567 00:12:24.567 ' 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.567 --rc genhtml_branch_coverage=1 00:12:24.567 --rc genhtml_function_coverage=1 00:12:24.567 --rc genhtml_legend=1 00:12:24.567 --rc geninfo_all_blocks=1 00:12:24.567 --rc geninfo_unexecuted_blocks=1 00:12:24.567 00:12:24.567 ' 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.567 --rc genhtml_branch_coverage=1 00:12:24.567 --rc genhtml_function_coverage=1 00:12:24.567 --rc genhtml_legend=1 00:12:24.567 --rc geninfo_all_blocks=1 00:12:24.567 --rc geninfo_unexecuted_blocks=1 00:12:24.567 00:12:24.567 ' 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.567 --rc genhtml_branch_coverage=1 00:12:24.567 --rc genhtml_function_coverage=1 00:12:24.567 --rc genhtml_legend=1 00:12:24.567 --rc geninfo_all_blocks=1 00:12:24.567 --rc geninfo_unexecuted_blocks=1 00:12:24.567 00:12:24.567 ' 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.567 13:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.567 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.828 13:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.828 13:20:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:32.965 13:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.965 13:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:32.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:32.965 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.965 13:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:32.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.965 
13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.965 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:32.966 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.966 13:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:12:32.966 00:12:32.966 --- 10.0.0.2 ping statistics --- 00:12:32.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.966 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:12:32.966 00:12:32.966 --- 10.0.0.1 ping statistics --- 00:12:32.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.966 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2059289 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2059289 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2059289 ']' 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.966 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.966 [2024-12-06 13:20:18.841316] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:12:32.966 [2024-12-06 13:20:18.841381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.966 [2024-12-06 13:20:18.945107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.966 [2024-12-06 13:20:19.001401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.966 [2024-12-06 13:20:19.001449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:32.966 [2024-12-06 13:20:19.001466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.966 [2024-12-06 13:20:19.001473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.966 [2024-12-06 13:20:19.001480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.966 [2024-12-06 13:20:19.003531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.966 [2024-12-06 13:20:19.003756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.966 [2024-12-06 13:20:19.003758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.966 [2024-12-06 13:20:19.003593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.227 13:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:33.227 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:33.489 "nvmf_tgt_1" 00:12:33.489 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:33.489 "nvmf_tgt_2" 00:12:33.489 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.489 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:33.751 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:33.751 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:33.751 true 00:12:33.751 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:33.751 true 00:12:33.751 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.751 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.013 rmmod nvme_tcp 00:12:34.013 rmmod nvme_fabrics 00:12:34.013 rmmod nvme_keyring 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2059289 ']' 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2059289 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2059289 ']' 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2059289 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059289 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059289' 00:12:34.013 killing process with pid 2059289 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2059289 00:12:34.013 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2059289 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.304 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.851 00:12:36.851 real 0m11.908s 00:12:36.851 user 0m10.302s 00:12:36.851 sys 0m6.194s 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 ************************************ 00:12:36.851 END TEST nvmf_multitarget 00:12:36.851 ************************************ 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.851 ************************************ 00:12:36.851 START TEST nvmf_rpc 00:12:36.851 ************************************ 00:12:36.851 13:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:36.851 * Looking for test storage... 
00:12:36.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.851 13:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:36.851 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.852 --rc genhtml_branch_coverage=1 00:12:36.852 --rc genhtml_function_coverage=1 00:12:36.852 --rc genhtml_legend=1 00:12:36.852 --rc geninfo_all_blocks=1 00:12:36.852 --rc geninfo_unexecuted_blocks=1 
00:12:36.852 00:12:36.852 ' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.852 --rc genhtml_branch_coverage=1 00:12:36.852 --rc genhtml_function_coverage=1 00:12:36.852 --rc genhtml_legend=1 00:12:36.852 --rc geninfo_all_blocks=1 00:12:36.852 --rc geninfo_unexecuted_blocks=1 00:12:36.852 00:12:36.852 ' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.852 --rc genhtml_branch_coverage=1 00:12:36.852 --rc genhtml_function_coverage=1 00:12:36.852 --rc genhtml_legend=1 00:12:36.852 --rc geninfo_all_blocks=1 00:12:36.852 --rc geninfo_unexecuted_blocks=1 00:12:36.852 00:12:36.852 ' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:36.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.852 --rc genhtml_branch_coverage=1 00:12:36.852 --rc genhtml_function_coverage=1 00:12:36.852 --rc genhtml_legend=1 00:12:36.852 --rc geninfo_all_blocks=1 00:12:36.852 --rc geninfo_unexecuted_blocks=1 00:12:36.852 00:12:36.852 ' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.852 13:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.852 13:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.852 13:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.999 
13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.999 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:12:45.000 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:45.000 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:45.000 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:45.000 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.000 13:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.000 
13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:12:45.000 00:12:45.000 --- 10.0.0.2 ping statistics --- 00:12:45.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.000 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:12:45.000 00:12:45.000 --- 10.0.0.1 ping statistics --- 00:12:45.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.000 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2063967 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2063967 00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2063967 ']'
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:45.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:45.000 13:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.000 [2024-12-06 13:20:30.797349] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:12:45.000 [2024-12-06 13:20:30.797415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:45.000 [2024-12-06 13:20:30.875390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:45.000 [2024-12-06 13:20:30.924085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:45.000 [2024-12-06 13:20:30.924138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:45.000 [2024-12-06 13:20:30.924145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:45.000 [2024-12-06 13:20:30.924151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:45.000 [2024-12-06 13:20:30.924156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:45.000 [2024-12-06 13:20:30.929486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:45.000 [2024-12-06 13:20:30.929566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:45.000 [2024-12-06 13:20:30.929889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:45.000 [2024-12-06 13:20:30.929891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.000 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:45.000 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:12:45.001 "tick_rate": 2400000000,
00:12:45.001 "poll_groups": [
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_000",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": []
00:12:45.001 },
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_001",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": []
00:12:45.001 },
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_002",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": []
00:12:45.001 },
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_003",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": []
00:12:45.001 }
00:12:45.001 ]
00:12:45.001 }'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 [2024-12-06 13:20:31.214285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:12:45.001 "tick_rate": 2400000000,
00:12:45.001 "poll_groups": [
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_000",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": [
00:12:45.001 {
00:12:45.001 "trtype": "TCP"
00:12:45.001 }
00:12:45.001 ]
00:12:45.001 },
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_001",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": [
00:12:45.001 {
00:12:45.001 "trtype": "TCP"
00:12:45.001 }
00:12:45.001 ]
00:12:45.001 },
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_002",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": [
00:12:45.001 {
00:12:45.001 "trtype": "TCP"
00:12:45.001 }
00:12:45.001 ]
00:12:45.001 },
00:12:45.001 {
00:12:45.001 "name": "nvmf_tgt_poll_group_003",
00:12:45.001 "admin_qpairs": 0,
00:12:45.001 "io_qpairs": 0,
00:12:45.001 "current_admin_qpairs": 0,
00:12:45.001 "current_io_qpairs": 0,
00:12:45.001 "pending_bdev_io": 0,
00:12:45.001 "completed_nvme_io": 0,
00:12:45.001 "transports": [
00:12:45.001 {
00:12:45.001 "trtype": "TCP"
00:12:45.001 }
00:12:45.001 ]
00:12:45.001 }
00:12:45.001 ]
00:12:45.001 }'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 Malloc1
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.001 [2024-12-06 13:20:31.427069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:45.001 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420
00:12:45.002 [2024-12-06 13:20:31.464170] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be'
00:12:45.002 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:45.002 could not add new controller: failed to write to nvme-fabrics device
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.002 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:46.389 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:12:46.389 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:46.389 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:46.389 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:46.389 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:48.300 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:48.300 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:48.300 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:48.561 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:48.561 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:48.561 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:48.561 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:48.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:48.561 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:48.562 [2024-12-06 13:20:35.139332] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be'
00:12:48.562 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:48.562 could not add new controller: failed to write to nvme-fabrics device
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.562 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:49.945 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:49.945 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:49.945 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:49.945 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:49.945 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:52.496 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:52.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:52.497 [2024-12-06 13:20:38.784091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:52.497 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.498 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:53.880 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:53.880 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:53.880 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:53.880 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:53.880 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:55.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:55.792 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.052 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:56.053 [2024-12-06 13:20:42.493121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.053 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:57.438 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:57.438 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:57.438 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:57.438 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:57.438 13:20:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:59.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:59.980 [2024-12-06 13:20:46.201976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:59.980 13:20:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:01.364 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.364 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.364 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.364 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.364 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.276 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.277 [2024-12-06 13:20:49.913390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.277 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.539 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.539 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.924 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.924 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.924 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:04.924 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:04.924 13:20:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:06.836 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.097 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 [2024-12-06 13:20:53.580179] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.098 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.484 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.484 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.484 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.484 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.484 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:11.021 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:11.021 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:11.021 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.021 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 [2024-12-06 13:20:57.362817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 [2024-12-06 13:20:57.430973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 
13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:11.022 [2024-12-06 13:20:57.499171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.022 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.022 [2024-12-06 13:20:57.571412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 [2024-12-06 13:20:57.639627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.023 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.283 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:11.283 "tick_rate": 2400000000, 00:13:11.284 "poll_groups": [ 00:13:11.284 { 00:13:11.284 "name": "nvmf_tgt_poll_group_000", 00:13:11.284 "admin_qpairs": 0, 00:13:11.284 "io_qpairs": 224, 00:13:11.284 "current_admin_qpairs": 0, 00:13:11.284 "current_io_qpairs": 0, 00:13:11.284 "pending_bdev_io": 0, 00:13:11.284 "completed_nvme_io": 224, 00:13:11.284 "transports": [ 00:13:11.284 { 00:13:11.284 "trtype": "TCP" 00:13:11.284 } 00:13:11.284 ] 00:13:11.284 }, 00:13:11.284 { 00:13:11.284 "name": "nvmf_tgt_poll_group_001", 00:13:11.284 "admin_qpairs": 1, 00:13:11.284 "io_qpairs": 223, 00:13:11.284 "current_admin_qpairs": 0, 00:13:11.284 "current_io_qpairs": 0, 00:13:11.284 "pending_bdev_io": 0, 00:13:11.284 "completed_nvme_io": 272, 00:13:11.284 "transports": [ 00:13:11.284 { 00:13:11.284 "trtype": "TCP" 00:13:11.284 } 00:13:11.284 ] 00:13:11.284 }, 00:13:11.284 { 00:13:11.284 "name": "nvmf_tgt_poll_group_002", 00:13:11.284 "admin_qpairs": 6, 00:13:11.284 "io_qpairs": 218, 00:13:11.284 "current_admin_qpairs": 0, 00:13:11.284 "current_io_qpairs": 0, 00:13:11.284 "pending_bdev_io": 0, 
00:13:11.284 "completed_nvme_io": 272, 00:13:11.284 "transports": [ 00:13:11.284 { 00:13:11.284 "trtype": "TCP" 00:13:11.284 } 00:13:11.284 ] 00:13:11.284 }, 00:13:11.284 { 00:13:11.284 "name": "nvmf_tgt_poll_group_003", 00:13:11.284 "admin_qpairs": 0, 00:13:11.284 "io_qpairs": 224, 00:13:11.284 "current_admin_qpairs": 0, 00:13:11.284 "current_io_qpairs": 0, 00:13:11.284 "pending_bdev_io": 0, 00:13:11.284 "completed_nvme_io": 471, 00:13:11.284 "transports": [ 00:13:11.284 { 00:13:11.284 "trtype": "TCP" 00:13:11.284 } 00:13:11.284 ] 00:13:11.284 } 00:13:11.284 ] 00:13:11.284 }' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
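The `jsum` helper exercised above (defined in target/rpc.sh, per the trace) sums one numeric field across all poll groups of the `nvmf_get_stats` JSON with a jq-then-awk pipeline. A minimal self-contained sketch, using the qpair counts from the stats blob logged above as sample data:

```shell
# Sample stats reduced from the nvmf_get_stats output logged above
# (admin_qpairs: 0,1,6,0; io_qpairs: 224,223,218,224).
stats='{"poll_groups":[
  {"admin_qpairs":0,"io_qpairs":224},
  {"admin_qpairs":1,"io_qpairs":223},
  {"admin_qpairs":6,"io_qpairs":218},
  {"admin_qpairs":0,"io_qpairs":224}]}'

# jsum: extract one field per poll group with jq, sum the lines with awk.
jsum() { printf '%s' "$stats" | jq "$1" | awk '{s+=$1} END {print s}'; }

jsum '.poll_groups[].admin_qpairs'   # 7   -> the "(( 7 > 0 ))" check above
jsum '.poll_groups[].io_qpairs'      # 889 -> the "(( 889 > 0 ))" check above
```

These totals reproduce exactly the `(( 7 > 0 ))` and `(( 889 > 0 ))` assertions recorded in the run.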
target/rpc.sh@123 -- # nvmftestfini 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.284 rmmod nvme_tcp 00:13:11.284 rmmod nvme_fabrics 00:13:11.284 rmmod nvme_keyring 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2063967 ']' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2063967 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2063967 ']' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2063967 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.284 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063967 00:13:11.544 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.544 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.544 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063967' 00:13:11.544 killing process with pid 2063967 00:13:11.544 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2063967 00:13:11.544 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2063967 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.544 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.080 00:13:14.080 real 0m37.155s 00:13:14.080 user 1m50.448s 00:13:14.080 sys 0m7.498s 00:13:14.080 13:21:00 
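The `iptr` step in the teardown above (nvmf/common.sh@791) removes only the firewall rules SPDK added, by saving the ruleset, dropping every line tagged with the `SPDK_NVMF` comment, and restoring the rest. A sketch of that filtering on hypothetical sample rules (the rule text here is illustrative, not from the run; the real step pipes `iptables-save` into `iptables-restore` and needs root):

```shell
# Hypothetical saved ruleset: one ordinary rule, one SPDK-tagged rule.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:rule
-A INPUT -j DROP'

# Keep everything except SPDK_NVMF-tagged rules, as iptr does before restore.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging each rule with a recognizable comment at insertion time is what makes this selective cleanup possible without disturbing unrelated firewall state.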
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.080 ************************************ 00:13:14.080 END TEST nvmf_rpc 00:13:14.080 ************************************ 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.080 ************************************ 00:13:14.080 START TEST nvmf_invalid 00:13:14.080 ************************************ 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:14.080 * Looking for test storage... 
00:13:14.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.080 --rc genhtml_branch_coverage=1 00:13:14.080 --rc 
genhtml_function_coverage=1 00:13:14.080 --rc genhtml_legend=1 00:13:14.080 --rc geninfo_all_blocks=1 00:13:14.080 --rc geninfo_unexecuted_blocks=1 00:13:14.080 00:13:14.080 ' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.080 --rc genhtml_branch_coverage=1 00:13:14.080 --rc genhtml_function_coverage=1 00:13:14.080 --rc genhtml_legend=1 00:13:14.080 --rc geninfo_all_blocks=1 00:13:14.080 --rc geninfo_unexecuted_blocks=1 00:13:14.080 00:13:14.080 ' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.080 --rc genhtml_branch_coverage=1 00:13:14.080 --rc genhtml_function_coverage=1 00:13:14.080 --rc genhtml_legend=1 00:13:14.080 --rc geninfo_all_blocks=1 00:13:14.080 --rc geninfo_unexecuted_blocks=1 00:13:14.080 00:13:14.080 ' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.080 --rc genhtml_branch_coverage=1 00:13:14.080 --rc genhtml_function_coverage=1 00:13:14.080 --rc genhtml_legend=1 00:13:14.080 --rc geninfo_all_blocks=1 00:13:14.080 --rc geninfo_unexecuted_blocks=1 00:13:14.080 00:13:14.080 ' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.080 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.081 13:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:14.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:14.081 13:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:14.081 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.216 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.216 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.217 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.217 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.217 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.217 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.217 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.217 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.217 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:13:22.217 00:13:22.217 --- 10.0.0.2 ping statistics --- 00:13:22.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.217 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:13:22.217 00:13:22.217 --- 10.0.0.1 ping statistics --- 00:13:22.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.217 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.217 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2073604 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2073604 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2073604 ']' 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.217 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.217 [2024-12-06 13:21:08.033988] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:22.218 [2024-12-06 13:21:08.034054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.218 [2024-12-06 13:21:08.137086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.218 [2024-12-06 13:21:08.190788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.218 [2024-12-06 13:21:08.190842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.218 [2024-12-06 13:21:08.190851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.218 [2024-12-06 13:21:08.190859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.218 [2024-12-06 13:21:08.190866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.218 [2024-12-06 13:21:08.193253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.218 [2024-12-06 13:21:08.193420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.218 [2024-12-06 13:21:08.193574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.218 [2024-12-06 13:21:08.193741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.218 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.218 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:22.218 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.218 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.218 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.479 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.479 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:22.479 13:21:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22466 00:13:22.479 [2024-12-06 13:21:09.080493] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:22.479 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:22.479 { 00:13:22.479 "nqn": "nqn.2016-06.io.spdk:cnode22466", 00:13:22.479 "tgt_name": "foobar", 00:13:22.479 "method": "nvmf_create_subsystem", 00:13:22.479 "req_id": 1 00:13:22.479 } 00:13:22.479 Got JSON-RPC error 
response 00:13:22.479 response: 00:13:22.479 { 00:13:22.479 "code": -32603, 00:13:22.479 "message": "Unable to find target foobar" 00:13:22.479 }' 00:13:22.479 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:22.479 { 00:13:22.479 "nqn": "nqn.2016-06.io.spdk:cnode22466", 00:13:22.479 "tgt_name": "foobar", 00:13:22.479 "method": "nvmf_create_subsystem", 00:13:22.480 "req_id": 1 00:13:22.480 } 00:13:22.480 Got JSON-RPC error response 00:13:22.480 response: 00:13:22.480 { 00:13:22.480 "code": -32603, 00:13:22.480 "message": "Unable to find target foobar" 00:13:22.480 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:22.480 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:22.480 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10393 00:13:22.741 [2024-12-06 13:21:09.289334] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10393: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:22.741 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:22.741 { 00:13:22.741 "nqn": "nqn.2016-06.io.spdk:cnode10393", 00:13:22.741 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:22.741 "method": "nvmf_create_subsystem", 00:13:22.741 "req_id": 1 00:13:22.741 } 00:13:22.741 Got JSON-RPC error response 00:13:22.741 response: 00:13:22.741 { 00:13:22.741 "code": -32602, 00:13:22.741 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:22.741 }' 00:13:22.741 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:22.741 { 00:13:22.741 "nqn": "nqn.2016-06.io.spdk:cnode10393", 00:13:22.741 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:22.741 "method": "nvmf_create_subsystem", 
00:13:22.741 "req_id": 1 00:13:22.741 } 00:13:22.741 Got JSON-RPC error response 00:13:22.741 response: 00:13:22.741 { 00:13:22.741 "code": -32602, 00:13:22.741 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:22.741 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:22.741 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:22.741 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2973 00:13:23.003 [2024-12-06 13:21:09.494054] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2973: invalid model number 'SPDK_Controller' 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:23.003 { 00:13:23.003 "nqn": "nqn.2016-06.io.spdk:cnode2973", 00:13:23.003 "model_number": "SPDK_Controller\u001f", 00:13:23.003 "method": "nvmf_create_subsystem", 00:13:23.003 "req_id": 1 00:13:23.003 } 00:13:23.003 Got JSON-RPC error response 00:13:23.003 response: 00:13:23.003 { 00:13:23.003 "code": -32602, 00:13:23.003 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.003 }' 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:23.003 { 00:13:23.003 "nqn": "nqn.2016-06.io.spdk:cnode2973", 00:13:23.003 "model_number": "SPDK_Controller\u001f", 00:13:23.003 "method": "nvmf_create_subsystem", 00:13:23.003 "req_id": 1 00:13:23.003 } 00:13:23.003 Got JSON-RPC error response 00:13:23.003 response: 00:13:23.003 { 00:13:23.003 "code": -32602, 00:13:23.003 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.003 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:23.003 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 
00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:23.004 
13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.004 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.266 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:23.266 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:23.267 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6?.ckO>/-\9>I}dZzB(D_' 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '6?.ckO>/-\9>I}dZzB(D_' nqn.2016-06.io.spdk:cnode28672 00:13:23.267 [2024-12-06 13:21:09.875518] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28672: invalid serial number '6?.ckO>/-\9>I}dZzB(D_' 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:23.267 { 00:13:23.267 "nqn": "nqn.2016-06.io.spdk:cnode28672", 00:13:23.267 "serial_number": "6?.ckO>/-\\9>I}dZzB(D_", 00:13:23.267 "method": "nvmf_create_subsystem", 00:13:23.267 "req_id": 1 00:13:23.267 } 00:13:23.267 Got JSON-RPC error response 00:13:23.267 response: 00:13:23.267 { 00:13:23.267 "code": -32602, 00:13:23.267 "message": "Invalid SN 6?.ckO>/-\\9>I}dZzB(D_" 00:13:23.267 }' 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:23.267 { 00:13:23.267 "nqn": "nqn.2016-06.io.spdk:cnode28672", 00:13:23.267 "serial_number": "6?.ckO>/-\\9>I}dZzB(D_", 00:13:23.267 "method": "nvmf_create_subsystem", 00:13:23.267 "req_id": 1 00:13:23.267 } 00:13:23.267 Got JSON-RPC error response 00:13:23.267 response: 00:13:23.267 { 00:13:23.267 "code": -32602, 00:13:23.267 "message": "Invalid SN 6?.ckO>/-\\9>I}dZzB(D_" 00:13:23.267 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:23.267 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.267 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:23.530 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:23.530 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:23.530 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.530 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.530 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 
00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 
00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 
13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.531 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:23.794 13:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:23.794 13:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:13:23.794 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5(b14Mx}/0Xxdsj!,\j)j^u0&Dylbf /dev/null' 00:13:25.910 13:21:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.821 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.821 00:13:27.821 real 0m14.226s 00:13:27.821 user 0m21.393s 00:13:27.821 sys 0m6.746s 00:13:27.821 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 
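The long trace above is `target/invalid.sh` assembling a random invalid subsystem name one character at a time: each iteration converts a code point to hex with `printf %x`, expands it to a literal character with `echo -e '\xNN'`, and appends it to `string`. A minimal sketch of that idiom, using a few of the code points visible in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the character-assembly loop traced from target/invalid.sh:
# decimal code point -> hex -> literal character -> append to $string.
string=''
for code in 120 125 47 48; do    # sample code points seen above: x } / 0
  hex=$(printf %x "$code")       # e.g. 120 -> 78
  ch=$(echo -e "\\x$hex")        # hex escape -> literal character
  string+=$ch
done
echo "$string"                   # -> x}/0
```

In the real script the code points are random and the loop runs until `ll` reaches `length`, which is why the trace repeats the same three commands for every character of the final string.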
-- # xtrace_disable 00:13:27.821 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.821 ************************************ 00:13:27.821 END TEST nvmf_invalid 00:13:27.821 ************************************ 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.082 ************************************ 00:13:28.082 START TEST nvmf_connect_stress 00:13:28.082 ************************************ 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:28.082 * Looking for test storage... 
00:13:28.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:28.082 13:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.082 13:21:14 
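The `cmp_versions 1.15 '<' 2` trace above shows `scripts/common.sh` splitting both version strings on `.`, `-`, and `:` into arrays (`ver1`, `ver2`) and comparing them field by field as integers. A condensed, illustrative reimplementation of that less-than check (`version_lt` is a hypothetical name, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise numeric version compare traced above.
version_lt() {
  local -a v1 v2
  local i n
  IFS=.-: read -ra v1 <<< "$1"   # split on '.', '-', ':' as in the trace
  IFS=.-: read -ra v2 <<< "$2"
  n=${#v1[@]}
  if [ "${#v2[@]}" -gt "$n" ]; then n=${#v2[@]}; fi
  for (( i = 0; i < n; i++ )); do
    # missing fields default to 0, so "2" compares like "2.0"
    if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
  done
  return 1                       # equal -> not strictly less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

The numeric (not lexicographic) comparison matters here: it is what lets the lcov check above conclude that version 1.15 is older than 2.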
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.082 --rc genhtml_branch_coverage=1 00:13:28.082 --rc genhtml_function_coverage=1 00:13:28.082 --rc genhtml_legend=1 00:13:28.082 --rc geninfo_all_blocks=1 00:13:28.082 --rc geninfo_unexecuted_blocks=1 00:13:28.082 00:13:28.082 ' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.082 --rc genhtml_branch_coverage=1 00:13:28.082 --rc genhtml_function_coverage=1 00:13:28.082 --rc genhtml_legend=1 00:13:28.082 --rc geninfo_all_blocks=1 00:13:28.082 --rc geninfo_unexecuted_blocks=1 00:13:28.082 00:13:28.082 ' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.082 --rc genhtml_branch_coverage=1 00:13:28.082 --rc genhtml_function_coverage=1 00:13:28.082 --rc genhtml_legend=1 00:13:28.082 --rc geninfo_all_blocks=1 00:13:28.082 --rc geninfo_unexecuted_blocks=1 00:13:28.082 00:13:28.082 ' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.082 --rc genhtml_branch_coverage=1 00:13:28.082 --rc genhtml_function_coverage=1 00:13:28.082 --rc genhtml_legend=1 00:13:28.082 --rc geninfo_all_blocks=1 00:13:28.082 --rc geninfo_unexecuted_blocks=1 00:13:28.082 00:13:28.082 ' 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.082 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.343 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
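The trace above also captures a real shell error: `nvmf/common.sh` line 33 evaluates `[ '' -eq 1 ]` and `test` reports `[: : integer expression expected`, because `-eq` requires both operands to be integers and the variable being tested is empty. A defensive pattern (illustrative only, not SPDK's actual fix) is to default the empty value before the numeric test:

```shell
#!/usr/bin/env bash
# [ '' -eq 1 ] errors out, as seen in the log above; defaulting the
# operand with ${var:-0} keeps the numeric test well-formed.
maybe_empty=''                       # stands in for the unset flag
if [ "${maybe_empty:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"                    # empty -> treated as 0 -> disabled
fi
```

The script continues despite the error only because the failing `[` sits in an `if` condition, where a nonzero status is treated as "false" rather than aborting the run.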
00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:28.344 13:21:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.657 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.658 13:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:36.658 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.658 13:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:36.658 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.658 13:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:36.658 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:36.658 Found net devices under 0000:4b:00.1: cvl_0_1 
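The device-discovery steps above glob `/sys/bus/pci/devices/$pci/net/` and then strip the paths down to interface names with `"${pci_net_devs[@]##*/}"` (nvmf/common.sh@411 and @427). A minimal standalone re-creation of that expansion, shown on a single path copied from the log rather than the harness's array:

```shell
# The sysfs glob yields full device paths; the ##*/ parameter expansion
# strips everything through the last slash, leaving only the kernel
# interface name. This is an illustrative sketch, not the harness itself.
pci_net_dev="/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0"
echo "Found net devices under 0000:4b:00.0: ${pci_net_dev##*/}"
```

This is why the log reports `cvl_0_0` rather than the full sysfs path.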
00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.658 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:36.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:36.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:13:36.658 00:13:36.658 --- 10.0.0.2 ping statistics --- 00:13:36.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.658 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:13:36.658 00:13:36.658 --- 10.0.0.1 ping statistics --- 00:13:36.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.658 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:36.658 13:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2079266 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2079266 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2079266 ']' 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.658 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.659 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.659 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.659 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.659 [2024-12-06 13:21:22.310720] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
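The `waitforlisten 2079266` call above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A hedged sketch of the core idea (the real helper in common/autotest_common.sh also verifies the PID stays alive and retries the RPC itself; this minimal version, with an illustrative name, only polls for the socket path):

```shell
# Poll for a path until it exists or max_retries attempts are exhausted.
# wait_for_path is an illustrative stand-in for SPDK's waitforlisten.
wait_for_path() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

Usage would look like `wait_for_path /var/tmp/spdk.sock && rpc_cmd ...`, which is why the subsequent `rpc_cmd` calls in the log can assume the target is up.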
00:13:36.659 [2024-12-06 13:21:22.310787] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.659 [2024-12-06 13:21:22.396845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.659 [2024-12-06 13:21:22.449800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.659 [2024-12-06 13:21:22.449851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.659 [2024-12-06 13:21:22.449860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.659 [2024-12-06 13:21:22.449867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.659 [2024-12-06 13:21:22.449878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:36.659 [2024-12-06 13:21:22.451791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.659 [2024-12-06 13:21:22.451947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.659 [2024-12-06 13:21:22.451949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.659 [2024-12-06 13:21:23.185285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.659 [2024-12-06 13:21:23.211166] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.659 NULL1 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2079597 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.659 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.920 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.180 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.180 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:37.180 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.180 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.180 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.440 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.440 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:37.440 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.440 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.440 13:21:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.701 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.701 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:37.701 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.701 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.701 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.271 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.271 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:38.271 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.271 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.271 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.532 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.532 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:38.532 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.532 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.532 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.790 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.790 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:38.790 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.790 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.790 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.049 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.049 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:39.049 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.049 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.049 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.308 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.308 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:39.308 13:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.308 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.308 13:21:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.877 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.877 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:39.877 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.877 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.877 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.137 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.137 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:40.137 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.137 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.137 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.397 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.397 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597 00:13:40.397 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.397 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.397 
13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:40.657 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.657 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:40.657 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:40.658 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.658 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:40.919 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.919 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:40.919 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:40.919 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.919 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:41.492 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.492 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:41.492 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:41.492 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.492 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:41.753 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.753 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:41.753 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:41.753 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.753 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:42.013 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.013 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:42.013 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:42.013 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.013 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:42.274 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.274 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:42.274 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:42.274 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.274 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:42.535 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.535 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:42.535 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:42.535 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.535 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:43.108 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.108 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:43.108 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:43.108 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.108 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:43.370 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.370 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:43.370 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:43.370 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.370 13:21:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:43.631 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.631 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:43.631 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:43.631 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.631 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:43.891 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.891 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:43.891 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:43.891 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.891 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:44.464 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.464 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:44.464 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:44.464 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.464 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:44.725 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.725 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:44.725 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:44.725 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.725 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:44.987 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.987 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:44.987 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:44.987 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.987 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:45.248 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.248 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:45.248 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:45.248 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.248 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:45.510 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.510 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:45.510 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:45.510 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.510 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:46.081 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.081 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:46.081 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:46.081 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.081 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:46.342 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.342 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:46.342 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:46.342 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.342 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:46.603 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.603 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:46.603 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:46.603 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.603 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:46.864 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:46.864 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.864 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2079597
00:13:46.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2079597) - No such process
00:13:46.864 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2079597
00:13:46.864 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:46.864 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:46.864 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:46.865 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2079266 ']'
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2079266
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2079266 ']'
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2079266
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:46.865 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2079266
00:13:47.125 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:47.125 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:47.125 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2079266'
00:13:47.125 killing process with pid 2079266
00:13:47.125 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2079266
00:13:47.125 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2079266
00:13:47.125 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:47.126 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:49.670
00:13:49.670 real 0m21.212s
00:13:49.670 user 0m42.420s
00:13:49.670 sys 0m9.154s
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:49.670 ************************************
00:13:49.670 END TEST nvmf_connect_stress
00:13:49.670 ************************************
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:49.670 ************************************
00:13:49.670 START TEST nvmf_fused_ordering
00:13:49.670 ************************************
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:49.670 * Looking for test storage...
00:13:49.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:13:49.670 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:49.670 --rc genhtml_branch_coverage=1
00:13:49.670 --rc genhtml_function_coverage=1
00:13:49.670 --rc genhtml_legend=1
00:13:49.670 --rc geninfo_all_blocks=1
00:13:49.670 --rc geninfo_unexecuted_blocks=1
00:13:49.670
00:13:49.670 '
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:49.670 --rc genhtml_branch_coverage=1
00:13:49.670 --rc genhtml_function_coverage=1
00:13:49.670 --rc genhtml_legend=1
00:13:49.670 --rc geninfo_all_blocks=1
00:13:49.670 --rc geninfo_unexecuted_blocks=1
00:13:49.670
00:13:49.670 '
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:13:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:49.670 --rc genhtml_branch_coverage=1
00:13:49.670 --rc genhtml_function_coverage=1
00:13:49.670 --rc genhtml_legend=1
00:13:49.670 --rc geninfo_all_blocks=1
00:13:49.670 --rc geninfo_unexecuted_blocks=1
00:13:49.670
00:13:49.670 '
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:13:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:49.670 --rc genhtml_branch_coverage=1
00:13:49.670 --rc genhtml_function_coverage=1
00:13:49.670 --rc genhtml_legend=1
00:13:49.670 --rc geninfo_all_blocks=1
00:13:49.670 --rc geninfo_unexecuted_blocks=1
00:13:49.670
00:13:49.670 '
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:49.670 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:49.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:13:49.671 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:57.807 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:13:57.808 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:13:57.808 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:57.808 13:21:43
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:57.808 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:57.808 Found net devices under 0000:4b:00.1: cvl_0_1 
00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:57.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:13:57.808 00:13:57.808 --- 10.0.0.2 ping statistics --- 00:13:57.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.808 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:13:57.808 00:13:57.808 --- 10.0.0.1 ping statistics --- 00:13:57.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.808 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:57.808 13:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2085721 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2085721 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2085721 ']' 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.808 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:57.808 [2024-12-06 13:21:43.605606] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:13:57.808 [2024-12-06 13:21:43.605673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.808 [2024-12-06 13:21:43.706559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.808 [2024-12-06 13:21:43.757288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.808 [2024-12-06 13:21:43.757340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.808 [2024-12-06 13:21:43.757349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.808 [2024-12-06 13:21:43.757356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.809 [2024-12-06 13:21:43.757362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.809 [2024-12-06 13:21:43.758167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.809 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.809 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:57.809 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.809 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.809 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 [2024-12-06 13:21:44.477757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 [2024-12-06 13:21:44.502072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 NULL1 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.069 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:58.069 [2024-12-06 13:21:44.571855] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:13:58.069 [2024-12-06 13:21:44.571901] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085986 ] 00:13:58.639 Attached to nqn.2016-06.io.spdk:cnode1 00:13:58.639 Namespace ID: 1 size: 1GB 00:13:58.639 fused_ordering(0) 00:13:58.639 fused_ordering(1) 00:13:58.639 fused_ordering(2) 00:13:58.639 fused_ordering(3) 00:13:58.639 fused_ordering(4) 00:13:58.639 fused_ordering(5) 00:13:58.639 fused_ordering(6) 00:13:58.639 fused_ordering(7) 00:13:58.639 fused_ordering(8) 00:13:58.639 fused_ordering(9) 00:13:58.639 fused_ordering(10) 00:13:58.639 fused_ordering(11) 00:13:58.639 fused_ordering(12) 00:13:58.639 fused_ordering(13) 00:13:58.639 fused_ordering(14) 00:13:58.639 fused_ordering(15) 00:13:58.639 fused_ordering(16) 00:13:58.639 fused_ordering(17) 00:13:58.639 fused_ordering(18) 00:13:58.639 fused_ordering(19) 00:13:58.639 fused_ordering(20) 00:13:58.639 fused_ordering(21) 00:13:58.639 fused_ordering(22) 00:13:58.639 fused_ordering(23) 00:13:58.639 fused_ordering(24) 00:13:58.639 fused_ordering(25) 00:13:58.639 fused_ordering(26) 00:13:58.639 fused_ordering(27) 00:13:58.639 
fused_ordering(28) 00:13:58.639 fused_ordering(29) 00:13:58.639 fused_ordering(30) 00:13:58.639 fused_ordering(31) 00:13:58.639 fused_ordering(32) 00:13:58.639 fused_ordering(33) 00:13:58.639 fused_ordering(34) 00:13:58.639 fused_ordering(35) 00:13:58.639 fused_ordering(36) 00:13:58.639 fused_ordering(37) 00:13:58.639 fused_ordering(38) 00:13:58.639 fused_ordering(39) 00:13:58.639 fused_ordering(40) 00:13:58.639 fused_ordering(41) 00:13:58.639 fused_ordering(42) 00:13:58.639 fused_ordering(43) 00:13:58.639 fused_ordering(44) 00:13:58.639 fused_ordering(45) 00:13:58.639 fused_ordering(46) 00:13:58.639 fused_ordering(47) 00:13:58.639 fused_ordering(48) 00:13:58.639 fused_ordering(49) 00:13:58.639 fused_ordering(50) 00:13:58.639 fused_ordering(51) 00:13:58.639 fused_ordering(52) 00:13:58.639 fused_ordering(53) 00:13:58.639 fused_ordering(54) 00:13:58.639 fused_ordering(55) 00:13:58.639 fused_ordering(56) 00:13:58.639 fused_ordering(57) 00:13:58.639 fused_ordering(58) 00:13:58.639 fused_ordering(59) 00:13:58.639 fused_ordering(60) 00:13:58.639 fused_ordering(61) 00:13:58.639 fused_ordering(62) 00:13:58.639 fused_ordering(63) 00:13:58.639 fused_ordering(64) 00:13:58.639 fused_ordering(65) 00:13:58.639 fused_ordering(66) 00:13:58.639 fused_ordering(67) 00:13:58.639 fused_ordering(68) 00:13:58.639 fused_ordering(69) 00:13:58.639 fused_ordering(70) 00:13:58.639 fused_ordering(71) 00:13:58.639 fused_ordering(72) 00:13:58.639 fused_ordering(73) 00:13:58.639 fused_ordering(74) 00:13:58.639 fused_ordering(75) 00:13:58.639 fused_ordering(76) 00:13:58.639 fused_ordering(77) 00:13:58.639 fused_ordering(78) 00:13:58.639 fused_ordering(79) 00:13:58.639 fused_ordering(80) 00:13:58.639 fused_ordering(81) 00:13:58.639 fused_ordering(82) 00:13:58.639 fused_ordering(83) 00:13:58.639 fused_ordering(84) 00:13:58.639 fused_ordering(85) 00:13:58.639 fused_ordering(86) 00:13:58.639 fused_ordering(87) 00:13:58.639 fused_ordering(88) 00:13:58.639 fused_ordering(89) 00:13:58.639 
fused_ordering(90) 00:13:58.639 fused_ordering(91) 00:13:58.639 fused_ordering(92) 00:13:58.639 fused_ordering(93) 00:13:58.639 fused_ordering(94) 00:13:58.639 fused_ordering(95) 00:13:58.639 fused_ordering(96) 00:13:58.639 fused_ordering(97) 00:13:58.639 fused_ordering(98) 00:13:58.639 fused_ordering(99) 00:13:58.639 fused_ordering(100) 00:13:58.639 fused_ordering(101) 00:13:58.639 fused_ordering(102) 00:13:58.639 fused_ordering(103) 00:13:58.639 fused_ordering(104) 00:13:58.639 fused_ordering(105) 00:13:58.639 fused_ordering(106) 00:13:58.639 fused_ordering(107) 00:13:58.639 fused_ordering(108) 00:13:58.639 fused_ordering(109) 00:13:58.639 fused_ordering(110) 00:13:58.639 fused_ordering(111) 00:13:58.639 fused_ordering(112) 00:13:58.639 fused_ordering(113) 00:13:58.639 fused_ordering(114) 00:13:58.639 fused_ordering(115) 00:13:58.639 fused_ordering(116) 00:13:58.639 fused_ordering(117) 00:13:58.639 fused_ordering(118) 00:13:58.639 fused_ordering(119) 00:13:58.639 fused_ordering(120) 00:13:58.639 fused_ordering(121) 00:13:58.639 fused_ordering(122) 00:13:58.639 fused_ordering(123) 00:13:58.639 fused_ordering(124) 00:13:58.639 fused_ordering(125) 00:13:58.639 fused_ordering(126) 00:13:58.639 fused_ordering(127) 00:13:58.639 fused_ordering(128) 00:13:58.639 fused_ordering(129) 00:13:58.639 fused_ordering(130) 00:13:58.639 fused_ordering(131) 00:13:58.639 fused_ordering(132) 00:13:58.639 fused_ordering(133) 00:13:58.639 fused_ordering(134) 00:13:58.639 fused_ordering(135) 00:13:58.639 fused_ordering(136) 00:13:58.639 fused_ordering(137) 00:13:58.639 fused_ordering(138) 00:13:58.639 fused_ordering(139) 00:13:58.639 fused_ordering(140) 00:13:58.639 fused_ordering(141) 00:13:58.639 fused_ordering(142) 00:13:58.639 fused_ordering(143) 00:13:58.639 fused_ordering(144) 00:13:58.639 fused_ordering(145) 00:13:58.639 fused_ordering(146) 00:13:58.639 fused_ordering(147) 00:13:58.639 fused_ordering(148) 00:13:58.639 fused_ordering(149) 00:13:58.639 fused_ordering(150) 
00:13:58.639 fused_ordering(151) 00:13:58.639 fused_ordering(152) 00:13:58.639 fused_ordering(153) 00:13:58.639 fused_ordering(154) 00:13:58.639 fused_ordering(155) 00:13:58.639 fused_ordering(156) 00:13:58.639 fused_ordering(157) 00:13:58.639 fused_ordering(158) 00:13:58.639 fused_ordering(159) 00:13:58.639 fused_ordering(160) 00:13:58.639 fused_ordering(161) 00:13:58.639 fused_ordering(162) 00:13:58.639 fused_ordering(163) 00:13:58.639 fused_ordering(164) 00:13:58.639 fused_ordering(165) 00:13:58.639 fused_ordering(166) 00:13:58.639 fused_ordering(167) 00:13:58.639 fused_ordering(168) 00:13:58.639 fused_ordering(169) 00:13:58.639 fused_ordering(170) 00:13:58.639 fused_ordering(171) 00:13:58.639 fused_ordering(172) 00:13:58.639 fused_ordering(173) 00:13:58.639 fused_ordering(174) 00:13:58.639 fused_ordering(175) 00:13:58.639 fused_ordering(176) 00:13:58.639 fused_ordering(177) 00:13:58.639 fused_ordering(178) 00:13:58.639 fused_ordering(179) 00:13:58.639 fused_ordering(180) 00:13:58.639 fused_ordering(181) 00:13:58.639 fused_ordering(182) 00:13:58.639 fused_ordering(183) 00:13:58.639 fused_ordering(184) 00:13:58.639 fused_ordering(185) 00:13:58.639 fused_ordering(186) 00:13:58.639 fused_ordering(187) 00:13:58.639 fused_ordering(188) 00:13:58.639 fused_ordering(189) 00:13:58.639 fused_ordering(190) 00:13:58.639 fused_ordering(191) 00:13:58.639 fused_ordering(192) 00:13:58.639 fused_ordering(193) 00:13:58.639 fused_ordering(194) 00:13:58.639 fused_ordering(195) 00:13:58.639 fused_ordering(196) 00:13:58.639 fused_ordering(197) 00:13:58.639 fused_ordering(198) 00:13:58.639 fused_ordering(199) 00:13:58.639 fused_ordering(200) 00:13:58.640 fused_ordering(201) 00:13:58.640 fused_ordering(202) 00:13:58.640 fused_ordering(203) 00:13:58.640 fused_ordering(204) 00:13:58.640 fused_ordering(205) 00:13:58.899 fused_ordering(206) 00:13:58.899 fused_ordering(207) 00:13:58.899 fused_ordering(208) 00:13:58.899 fused_ordering(209) 00:13:58.899 fused_ordering(210) 00:13:58.899 
fused_ordering(211) 00:13:58.899 fused_ordering(212) 00:13:58.899 fused_ordering(213) 00:13:58.899 fused_ordering(214) 00:13:58.899 fused_ordering(215) 00:13:58.899 fused_ordering(216) 00:13:58.899 fused_ordering(217) 00:13:58.899 fused_ordering(218) 00:13:58.899 fused_ordering(219) 00:13:58.899 fused_ordering(220) 00:13:58.899 fused_ordering(221) 00:13:58.899 fused_ordering(222) 00:13:58.899 fused_ordering(223) 00:13:58.899 fused_ordering(224) 00:13:58.899 fused_ordering(225) 00:13:58.899 fused_ordering(226) 00:13:58.899 fused_ordering(227) 00:13:58.899 fused_ordering(228) 00:13:58.899 fused_ordering(229) 00:13:58.899 fused_ordering(230) 00:13:58.899 fused_ordering(231) 00:13:58.899 fused_ordering(232) 00:13:58.899 fused_ordering(233) 00:13:58.899 fused_ordering(234) 00:13:58.899 fused_ordering(235) 00:13:58.899 fused_ordering(236) 00:13:58.899 fused_ordering(237) 00:13:58.899 fused_ordering(238) 00:13:58.899 fused_ordering(239) 00:13:58.899 fused_ordering(240) 00:13:58.899 fused_ordering(241) 00:13:58.899 fused_ordering(242) 00:13:58.899 fused_ordering(243) 00:13:58.899 fused_ordering(244) 00:13:58.899 fused_ordering(245) 00:13:58.899 fused_ordering(246) 00:13:58.899 fused_ordering(247) 00:13:58.899 fused_ordering(248) 00:13:58.899 fused_ordering(249) 00:13:58.899 fused_ordering(250) 00:13:58.899 fused_ordering(251) 00:13:58.899 fused_ordering(252) 00:13:58.899 fused_ordering(253) 00:13:58.899 fused_ordering(254) 00:13:58.899 fused_ordering(255) 00:13:58.899 fused_ordering(256) 00:13:58.899 fused_ordering(257) 00:13:58.899 fused_ordering(258) 00:13:58.899 fused_ordering(259) 00:13:58.899 fused_ordering(260) 00:13:58.899 fused_ordering(261) 00:13:58.899 fused_ordering(262) 00:13:58.899 fused_ordering(263) 00:13:58.899 fused_ordering(264) 00:13:58.899 fused_ordering(265) 00:13:58.899 fused_ordering(266) 00:13:58.899 fused_ordering(267) 00:13:58.899 fused_ordering(268) 00:13:58.899 fused_ordering(269) 00:13:58.899 fused_ordering(270) 00:13:58.899 fused_ordering(271) 
00:13:58.899 fused_ordering(272) 00:13:58.899 fused_ordering(273) 00:13:58.899 fused_ordering(274) 00:13:58.899 fused_ordering(275) 00:13:58.899 fused_ordering(276) 00:13:58.899 fused_ordering(277) 00:13:58.899 fused_ordering(278) 00:13:58.899 fused_ordering(279) 00:13:58.899 fused_ordering(280) 00:13:58.899 fused_ordering(281) 00:13:58.899 fused_ordering(282) 00:13:58.899 fused_ordering(283) 00:13:58.899 fused_ordering(284) 00:13:58.899 fused_ordering(285) 00:13:58.899 fused_ordering(286) 00:13:58.899 fused_ordering(287) 00:13:58.899 fused_ordering(288) 00:13:58.899 fused_ordering(289) 00:13:58.899 fused_ordering(290) 00:13:58.899 fused_ordering(291) 00:13:58.899 fused_ordering(292) 00:13:58.899 fused_ordering(293) 00:13:58.899 fused_ordering(294) 00:13:58.899 fused_ordering(295) 00:13:58.899 fused_ordering(296) 00:13:58.899 fused_ordering(297) 00:13:58.899 fused_ordering(298) 00:13:58.899 fused_ordering(299) 00:13:58.899 fused_ordering(300) 00:13:58.899 fused_ordering(301) 00:13:58.899 fused_ordering(302) 00:13:58.899 fused_ordering(303) 00:13:58.899 fused_ordering(304) 00:13:58.899 fused_ordering(305) 00:13:58.899 fused_ordering(306) 00:13:58.899 fused_ordering(307) 00:13:58.899 fused_ordering(308) 00:13:58.899 fused_ordering(309) 00:13:58.899 fused_ordering(310) 00:13:58.899 fused_ordering(311) 00:13:58.899 fused_ordering(312) 00:13:58.899 fused_ordering(313) 00:13:58.899 fused_ordering(314) 00:13:58.899 fused_ordering(315) 00:13:58.899 fused_ordering(316) 00:13:58.899 fused_ordering(317) 00:13:58.899 fused_ordering(318) 00:13:58.899 fused_ordering(319) 00:13:58.899 fused_ordering(320) 00:13:58.899 fused_ordering(321) 00:13:58.899 fused_ordering(322) 00:13:58.899 fused_ordering(323) 00:13:58.899 fused_ordering(324) 00:13:58.899 fused_ordering(325) 00:13:58.899 fused_ordering(326) 00:13:58.899 fused_ordering(327) 00:13:58.899 fused_ordering(328) 00:13:58.899 fused_ordering(329) 00:13:58.899 fused_ordering(330) 00:13:58.899 fused_ordering(331) 00:13:58.899 
fused_ordering(332) 00:13:58.899 [fused_ordering(333) through fused_ordering(1022) elided: identical per-iteration lines, timestamps advancing from 00:13:58.899 to 00:14:00.671] fused_ordering(1023) 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:00.671 rmmod nvme_tcp 00:14:00.671 rmmod nvme_fabrics 00:14:00.671 rmmod nvme_keyring 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2085721 ']' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2085721 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2085721 ']' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2085721 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2085721 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2085721' 00:14:00.671 killing process with pid 2085721 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2085721 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2085721 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.671 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:03.216 00:14:03.216 real 0m13.507s 00:14:03.216 user 0m7.133s 00:14:03.216 sys 0m7.288s 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.216 ************************************ 00:14:03.216 END TEST nvmf_fused_ordering 00:14:03.216 ************************************ 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.216 13:21:49 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.216 ************************************ 00:14:03.216 START TEST nvmf_ns_masking 00:14:03.216 ************************************ 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.216 * Looking for test storage... 00:14:03.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.216 13:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.216 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:03.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.217 --rc genhtml_branch_coverage=1 00:14:03.217 --rc genhtml_function_coverage=1 00:14:03.217 --rc genhtml_legend=1 00:14:03.217 --rc geninfo_all_blocks=1 00:14:03.217 --rc geninfo_unexecuted_blocks=1 00:14:03.217 00:14:03.217 ' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:03.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.217 --rc genhtml_branch_coverage=1 00:14:03.217 --rc genhtml_function_coverage=1 00:14:03.217 --rc genhtml_legend=1 00:14:03.217 --rc geninfo_all_blocks=1 00:14:03.217 --rc geninfo_unexecuted_blocks=1 00:14:03.217 00:14:03.217 ' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:03.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.217 --rc genhtml_branch_coverage=1 00:14:03.217 --rc genhtml_function_coverage=1 00:14:03.217 --rc genhtml_legend=1 00:14:03.217 --rc geninfo_all_blocks=1 00:14:03.217 --rc geninfo_unexecuted_blocks=1 00:14:03.217 00:14:03.217 ' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:03.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.217 --rc genhtml_branch_coverage=1 00:14:03.217 --rc 
genhtml_function_coverage=1 00:14:03.217 --rc genhtml_legend=1 00:14:03.217 --rc geninfo_all_blocks=1 00:14:03.217 --rc geninfo_unexecuted_blocks=1 00:14:03.217 00:14:03.217 ' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ed55dee8-0aab-4e46-b30d-db919ca4fe71 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4fdcbf4a-83f6-4dc2-b2e3-90d9b8d177f0 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9cfbcadc-79fe-4387-b24c-13b5bc3857e8 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.217 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:03.218 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:03.218 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:03.218 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.355 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:11.356 13:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.356 13:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:11.356 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:11.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:14:11.356 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:11.356 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.356 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:11.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:14:11.356 00:14:11.356 --- 10.0.0.2 ping statistics --- 00:14:11.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.356 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:14:11.356 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:11.356 00:14:11.357 --- 10.0.0.1 ping statistics --- 00:14:11.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.357 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2090662 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2090662 
00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2090662 ']' 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.357 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.357 [2024-12-06 13:21:57.243708] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:11.357 [2024-12-06 13:21:57.243772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.357 [2024-12-06 13:21:57.341692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.357 [2024-12-06 13:21:57.393336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.357 [2024-12-06 13:21:57.393386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.357 [2024-12-06 13:21:57.393395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.357 [2024-12-06 13:21:57.393402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.357 [2024-12-06 13:21:57.393409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.357 [2024-12-06 13:21:57.394195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.618 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.618 [2024-12-06 13:21:58.265644] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.880 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:11.880 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:11.880 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:11.880 Malloc1 00:14:11.880 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.141 Malloc2 00:14:12.141 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:12.403 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:12.403 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.663 [2024-12-06 13:21:59.205201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.663 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:12.663 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9cfbcadc-79fe-4387-b24c-13b5bc3857e8 -a 10.0.0.2 -s 4420 -i 4 00:14:12.924 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.924 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:12.924 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.925 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:12.925 13:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:14.837 [ 0]:0x1 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:14.837 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.097 
13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6887c327b9b345e6ac1155d1602ac7ec 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6887c327b9b345e6ac1155d1602ac7ec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.097 [ 0]:0x1 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6887c327b9b345e6ac1155d1602ac7ec 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6887c327b9b345e6ac1155d1602ac7ec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.097 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.357 [ 1]:0x2 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.357 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.617 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9cfbcadc-79fe-4387-b24c-13b5bc3857e8 -a 10.0.0.2 -s 4420 -i 4 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.878 13:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:15.878 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.417 [ 0]:0x2 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.417 [ 0]:0x1 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6887c327b9b345e6ac1155d1602ac7ec 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6887c327b9b345e6ac1155d1602ac7ec != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.417 [ 1]:0x2 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.417 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.677 [ 0]:0x2 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.677 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.937 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:18.937 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9cfbcadc-79fe-4387-b24c-13b5bc3857e8 -a 10.0.0.2 -s 4420 -i 4 00:14:19.197 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:19.197 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.197 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.197 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:19.197 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:19.197 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.108 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.367 [ 0]:0x1 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.367 13:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6887c327b9b345e6ac1155d1602ac7ec 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6887c327b9b345e6ac1155d1602ac7ec != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.367 [ 1]:0x2 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.367 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:21.685 
13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.685 [ 0]:0x2 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.685 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.946 13:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.946 [2024-12-06 13:22:08.462816] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:21.946 request: 00:14:21.946 { 00:14:21.946 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.946 "nsid": 2, 00:14:21.946 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.946 "method": "nvmf_ns_remove_host", 00:14:21.946 "req_id": 1 00:14:21.946 } 00:14:21.946 Got JSON-RPC error response 00:14:21.946 response: 00:14:21.946 { 00:14:21.946 "code": -32602, 00:14:21.946 "message": "Invalid parameters" 00:14:21.946 } 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.946 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:21.947 13:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.947 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.947 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.947 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:21.947 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.947 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.947 [ 0]:0x2 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=416b8590b2a64d3bb314a5ab33208103 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 416b8590b2a64d3bb314a5ab33208103 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.206 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2093128 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2093128 /var/tmp/host.sock 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2093128 ']' 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:22.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.207 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:22.207 [2024-12-06 13:22:08.844947] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:14:22.207 [2024-12-06 13:22:08.844997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093128 ] 00:14:22.466 [2024-12-06 13:22:08.931572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.466 [2024-12-06 13:22:08.967413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.034 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.034 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:23.034 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.294 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.553 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ed55dee8-0aab-4e46-b30d-db919ca4fe71 00:14:23.553 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:23.553 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ED55DEE80AAB4E46B30DDB919CA4FE71 -i 00:14:23.553 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4fdcbf4a-83f6-4dc2-b2e3-90d9b8d177f0 00:14:23.553 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:23.553 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4FDCBF4A83F64DC2B2E390D9B8D177F0 -i 00:14:23.814 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.073 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:24.073 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.073 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.332 nvme0n1 00:14:24.672 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.672 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.672 nvme1n2 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:24.978 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:25.247 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ed55dee8-0aab-4e46-b30d-db919ca4fe71 == \e\d\5\5\d\e\e\8\-\0\a\a\b\-\4\e\4\6\-\b\3\0\d\-\d\b\9\1\9\c\a\4\f\e\7\1 ]] 00:14:25.247 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:25.247 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:25.247 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:25.247 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4fdcbf4a-83f6-4dc2-b2e3-90d9b8d177f0 == \4\f\d\c\b\f\4\a\-\8\3\f\6\-\4\d\c\2\-\b\2\e\3\-\9\0\d\9\b\8\d\1\7\7\f\0 ]] 00:14:25.247 13:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.508 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ed55dee8-0aab-4e46-b30d-db919ca4fe71 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ED55DEE80AAB4E46B30DDB919CA4FE71 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ED55DEE80AAB4E46B30DDB919CA4FE71 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ED55DEE80AAB4E46B30DDB919CA4FE71 00:14:25.769 [2024-12-06 13:22:12.405193] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:25.769 [2024-12-06 13:22:12.405221] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:25.769 [2024-12-06 13:22:12.405232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.769 request: 00:14:25.769 { 00:14:25.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.769 "namespace": { 00:14:25.769 "bdev_name": "invalid", 00:14:25.769 "nsid": 1, 00:14:25.769 "nguid": "ED55DEE80AAB4E46B30DDB919CA4FE71", 00:14:25.769 "no_auto_visible": false, 00:14:25.769 "hide_metadata": false 00:14:25.769 }, 00:14:25.769 "method": "nvmf_subsystem_add_ns", 00:14:25.769 "req_id": 1 00:14:25.769 } 00:14:25.769 Got JSON-RPC error response 00:14:25.769 response: 00:14:25.769 { 00:14:25.769 "code": -32602, 00:14:25.769 "message": "Invalid parameters" 00:14:25.769 } 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:25.769 13:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ed55dee8-0aab-4e46-b30d-db919ca4fe71 00:14:25.769 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:26.030 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ED55DEE80AAB4E46B30DDB919CA4FE71 -i 00:14:26.030 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2093128 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2093128 ']' 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2093128 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:28.576 13:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2093128 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2093128' 00:14:28.576 killing process with pid 2093128 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2093128 00:14:28.576 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2093128 00:14:28.576 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
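The `killprocess` trace above walks a guarded teardown: validate the pid argument, probe the process with `kill -0`, read its command name with `ps`, refuse to kill a `sudo` wrapper, then `kill` and `wait`. A simplified stand-alone sketch of that flow follows; the function name and body here are a reconstruction for illustration, not the exact `common/autotest_common.sh` source:

```shell
# Rough sketch of the killprocess flow traced in the log above
# (simplified reconstruction; the real helper lives in
# common/autotest_common.sh and logs via xtrace).
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1               # mirrors the '[' -z "$pid" ']' guard
  kill -0 "$pid" 2>/dev/null || return 1  # process must still be alive
  local name
  name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1 for an SPDK app
  [ "$name" != "sudo" ] || return 1       # never signal the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true         # reap it; ignore the TERM exit status
}
```

The `sudo` check matters because the tests launch targets through `sudo`: signalling the wrapper instead of the reactor process would leave the SPDK target running.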
00:14:28.836 rmmod nvme_tcp 00:14:28.836 rmmod nvme_fabrics 00:14:28.836 rmmod nvme_keyring 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2090662 ']' 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2090662 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2090662 ']' 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2090662 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2090662 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2090662' 00:14:28.836 killing process with pid 2090662 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2090662 00:14:28.836 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2090662 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.096 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.008 00:14:31.008 real 0m28.185s 00:14:31.008 user 0m31.951s 00:14:31.008 sys 0m8.105s 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.008 ************************************ 00:14:31.008 END TEST nvmf_ns_masking 00:14:31.008 ************************************ 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.008 13:22:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.270 ************************************ 00:14:31.270 START TEST nvmf_nvme_cli 00:14:31.270 ************************************ 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.270 * Looking for test storage... 00:14:31.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.270 --rc genhtml_branch_coverage=1 00:14:31.270 --rc genhtml_function_coverage=1 00:14:31.270 --rc genhtml_legend=1 00:14:31.270 --rc geninfo_all_blocks=1 00:14:31.270 --rc geninfo_unexecuted_blocks=1 00:14:31.270 
00:14:31.270 ' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.270 --rc genhtml_branch_coverage=1 00:14:31.270 --rc genhtml_function_coverage=1 00:14:31.270 --rc genhtml_legend=1 00:14:31.270 --rc geninfo_all_blocks=1 00:14:31.270 --rc geninfo_unexecuted_blocks=1 00:14:31.270 00:14:31.270 ' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.270 --rc genhtml_branch_coverage=1 00:14:31.270 --rc genhtml_function_coverage=1 00:14:31.270 --rc genhtml_legend=1 00:14:31.270 --rc geninfo_all_blocks=1 00:14:31.270 --rc geninfo_unexecuted_blocks=1 00:14:31.270 00:14:31.270 ' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:31.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.270 --rc genhtml_branch_coverage=1 00:14:31.270 --rc genhtml_function_coverage=1 00:14:31.270 --rc genhtml_legend=1 00:14:31.270 --rc geninfo_all_blocks=1 00:14:31.270 --rc geninfo_unexecuted_blocks=1 00:14:31.270 00:14:31.270 ' 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
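The `lt 1.15 2` check traced above (via `cmp_versions` in `scripts/common.sh`) splits each version string on `.` and `-` with `IFS=.-` and `read -ra`, then compares the components pairwise as integers, padding the shorter version with zeros. A minimal self-contained sketch of that comparison, assuming purely numeric components (the function name `version_lt` is mine, not the script's):

```shell
# Component-wise version comparison in the spirit of the
# cmp_versions helper traced above: split on '.' and '-',
# then compare each component numerically.
version_lt() {
  local IFS=.-
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}
```

Comparing components numerically rather than lexically is the point of the helper: a plain string compare would wrongly rank `1.15` above `1.2`, whereas `version_lt 1.2 1.15` correctly treats 2 < 15.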
00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.270 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.271 13:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.271 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:39.414 13:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:39.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:39.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.414 13:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:39.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:39.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:39.414 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.415 13:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:39.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:14:39.415 00:14:39.415 --- 10.0.0.2 ping statistics --- 00:14:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.415 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:14:39.415 00:14:39.415 --- 10.0.0.1 ping statistics --- 00:14:39.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.415 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:39.415 13:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2098569 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2098569 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2098569 ']' 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.415 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.415 [2024-12-06 13:22:25.521762] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:14:39.415 [2024-12-06 13:22:25.521826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.415 [2024-12-06 13:22:25.621919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.415 [2024-12-06 13:22:25.676446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.415 [2024-12-06 13:22:25.676510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.415 [2024-12-06 13:22:25.676519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.415 [2024-12-06 13:22:25.676527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.415 [2024-12-06 13:22:25.676533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:39.415 [2024-12-06 13:22:25.678601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.415 [2024-12-06 13:22:25.678761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.415 [2024-12-06 13:22:25.678921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.415 [2024-12-06 13:22:25.678922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.986 [2024-12-06 13:22:26.396393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.986 Malloc0 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:39.986 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.987 Malloc1 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.987 [2024-12-06 13:22:26.508764] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.987 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:40.249 00:14:40.249 Discovery Log Number of Records 2, Generation counter 2 00:14:40.249 =====Discovery Log Entry 0====== 00:14:40.249 trtype: tcp 00:14:40.249 adrfam: ipv4 00:14:40.249 subtype: current discovery subsystem 00:14:40.249 treq: not required 00:14:40.249 portid: 0 00:14:40.249 trsvcid: 4420 
00:14:40.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:40.249 traddr: 10.0.0.2 00:14:40.249 eflags: explicit discovery connections, duplicate discovery information 00:14:40.249 sectype: none 00:14:40.249 =====Discovery Log Entry 1====== 00:14:40.249 trtype: tcp 00:14:40.249 adrfam: ipv4 00:14:40.249 subtype: nvme subsystem 00:14:40.249 treq: not required 00:14:40.249 portid: 0 00:14:40.249 trsvcid: 4420 00:14:40.249 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:40.249 traddr: 10.0.0.2 00:14:40.249 eflags: none 00:14:40.249 sectype: none 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:40.249 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.635 13:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:41.635 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:41.635 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.635 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:41.635 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:41.635 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.545 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:43.805 
13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:43.805 /dev/nvme0n2 ]] 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.805 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:44.065 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.324 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.324 rmmod nvme_tcp 00:14:44.324 rmmod nvme_fabrics 00:14:44.325 rmmod nvme_keyring 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2098569 ']' 
00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2098569 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2098569 ']' 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2098569 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2098569 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2098569' 00:14:44.325 killing process with pid 2098569 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2098569 00:14:44.325 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2098569 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.585 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:47.124 00:14:47.124 real 0m15.483s 00:14:47.124 user 0m23.792s 00:14:47.124 sys 0m6.387s 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.124 ************************************ 00:14:47.124 END TEST nvmf_nvme_cli 00:14:47.124 ************************************ 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.124 ************************************ 00:14:47.124 
START TEST nvmf_vfio_user 00:14:47.124 ************************************ 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:47.124 * Looking for test storage... 00:14:47.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:47.124 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.125 13:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:47.125 13:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:47.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.125 --rc genhtml_branch_coverage=1 00:14:47.125 --rc genhtml_function_coverage=1 00:14:47.125 --rc genhtml_legend=1 00:14:47.125 --rc geninfo_all_blocks=1 00:14:47.125 --rc geninfo_unexecuted_blocks=1 00:14:47.125 00:14:47.125 ' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:47.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.125 --rc genhtml_branch_coverage=1 00:14:47.125 --rc genhtml_function_coverage=1 00:14:47.125 --rc genhtml_legend=1 00:14:47.125 --rc geninfo_all_blocks=1 00:14:47.125 --rc geninfo_unexecuted_blocks=1 00:14:47.125 00:14:47.125 ' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:47.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.125 --rc genhtml_branch_coverage=1 00:14:47.125 --rc genhtml_function_coverage=1 00:14:47.125 --rc genhtml_legend=1 00:14:47.125 --rc geninfo_all_blocks=1 00:14:47.125 --rc geninfo_unexecuted_blocks=1 00:14:47.125 00:14:47.125 ' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:47.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.125 --rc genhtml_branch_coverage=1 00:14:47.125 --rc genhtml_function_coverage=1 00:14:47.125 --rc genhtml_legend=1 00:14:47.125 --rc geninfo_all_blocks=1 00:14:47.125 --rc geninfo_unexecuted_blocks=1 00:14:47.125 00:14:47.125 ' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.125 
13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:47.125 13:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:47.125 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2100326 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2100326' 00:14:47.126 Process pid: 2100326 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2100326 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2100326 ']' 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m '[0,1,2,3]' 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.126 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:47.126 [2024-12-06 13:22:33.534332] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:47.126 [2024-12-06 13:22:33.534404] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.126 [2024-12-06 13:22:33.623984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.126 [2024-12-06 13:22:33.654352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.126 [2024-12-06 13:22:33.654379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.126 [2024-12-06 13:22:33.654385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.126 [2024-12-06 13:22:33.654390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.126 [2024-12-06 13:22:33.654394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:47.126 [2024-12-06 13:22:33.655585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.126 [2024-12-06 13:22:33.655821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.126 [2024-12-06 13:22:33.655975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.126 [2024-12-06 13:22:33.655976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.695 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.695 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:47.695 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:49.076 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:49.076 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:49.076 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:49.076 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.076 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:49.076 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:49.076 Malloc1 00:14:49.335 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:49.335 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:49.594 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:49.855 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.855 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:49.855 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.855 Malloc2 00:14:49.855 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:50.115 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:50.376 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:50.639 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:50.639 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:50.639 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:50.639 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:50.639 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:50.639 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:50.639 [2024-12-06 13:22:37.074282] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:14:50.639 [2024-12-06 13:22:37.074327] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101081 ] 00:14:50.639 [2024-12-06 13:22:37.113740] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:50.639 [2024-12-06 13:22:37.119025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.639 [2024-12-06 13:22:37.119045] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1da8c40000 00:14:50.639 [2024-12-06 13:22:37.120026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.639 [2024-12-06 13:22:37.121033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.639 [2024-12-06 13:22:37.122040] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.639 [2024-12-06 13:22:37.123049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.639 [2024-12-06 13:22:37.124050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.639 [2024-12-06 13:22:37.125054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.639 [2024-12-06 13:22:37.126055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.640 [2024-12-06 13:22:37.127063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.640 [2024-12-06 13:22:37.128072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.640 [2024-12-06 13:22:37.128078] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1da8c35000 00:14:50.640 [2024-12-06 13:22:37.128990] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.640 [2024-12-06 13:22:37.138433] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:50.640 [2024-12-06 13:22:37.138452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:50.640 [2024-12-06 13:22:37.144170] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:50.640 [2024-12-06 13:22:37.144206] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:50.640 [2024-12-06 13:22:37.144270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:50.640 [2024-12-06 13:22:37.144282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:50.640 [2024-12-06 13:22:37.144287] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:50.640 [2024-12-06 13:22:37.145173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:50.640 [2024-12-06 13:22:37.145180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:50.640 [2024-12-06 13:22:37.145186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:50.640 [2024-12-06 13:22:37.146177] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:50.640 [2024-12-06 13:22:37.146183] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:50.640 [2024-12-06 13:22:37.146189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:50.640 [2024-12-06 13:22:37.147182] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:50.640 [2024-12-06 13:22:37.147189] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:50.640 [2024-12-06 13:22:37.148193] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:50.640 [2024-12-06 13:22:37.148199] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:50.640 [2024-12-06 13:22:37.148203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:50.640 [2024-12-06 13:22:37.148208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:50.640 [2024-12-06 13:22:37.148314] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:50.640 [2024-12-06 13:22:37.148317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:50.640 [2024-12-06 13:22:37.148321] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:50.640 [2024-12-06 13:22:37.149200] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:50.640 [2024-12-06 13:22:37.150207] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:50.640 [2024-12-06 13:22:37.151218] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:50.640 [2024-12-06 13:22:37.152220] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.640 [2024-12-06 13:22:37.152268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:50.640 [2024-12-06 13:22:37.153229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:50.640 [2024-12-06 13:22:37.153235] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:50.640 [2024-12-06 13:22:37.153238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:50.640 [2024-12-06 13:22:37.153259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153271] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.640 [2024-12-06 13:22:37.153275] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.640 [2024-12-06 13:22:37.153278] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.640 [2024-12-06 13:22:37.153289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.640 [2024-12-06 13:22:37.153327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:50.640 [2024-12-06 13:22:37.153335] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:50.640 [2024-12-06 13:22:37.153342] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:50.640 [2024-12-06 13:22:37.153345] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:50.640 [2024-12-06 13:22:37.153348] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:50.640 [2024-12-06 13:22:37.153352] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:50.640 [2024-12-06 13:22:37.153355] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:50.640 [2024-12-06 13:22:37.153359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:50.640 [2024-12-06 13:22:37.153386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:50.640 [2024-12-06 13:22:37.153395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.640 [2024-12-06 
13:22:37.153401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.640 [2024-12-06 13:22:37.153407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.640 [2024-12-06 13:22:37.153413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.640 [2024-12-06 13:22:37.153416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:50.640 [2024-12-06 13:22:37.153441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:50.640 [2024-12-06 13:22:37.153445] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:50.640 [2024-12-06 13:22:37.153448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.640 [2024-12-06 13:22:37.153478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:50.640 [2024-12-06 13:22:37.153522] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153535] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:50.640 [2024-12-06 13:22:37.153538] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:50.640 [2024-12-06 13:22:37.153541] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.640 [2024-12-06 13:22:37.153545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:50.640 [2024-12-06 13:22:37.153559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:50.640 [2024-12-06 13:22:37.153567] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:50.640 [2024-12-06 13:22:37.153577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:50.640 [2024-12-06 13:22:37.153588] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.640 [2024-12-06 13:22:37.153591] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.640 [2024-12-06 13:22:37.153593] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.640 [2024-12-06 13:22:37.153598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153637] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.641 [2024-12-06 13:22:37.153640] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.641 [2024-12-06 13:22:37.153643] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.641 [2024-12-06 13:22:37.153647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153690] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:50.641 [2024-12-06 13:22:37.153694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:50.641 [2024-12-06 13:22:37.153697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:50.641 [2024-12-06 13:22:37.153712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153787] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:50.641 [2024-12-06 13:22:37.153791] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:50.641 [2024-12-06 13:22:37.153793] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:50.641 [2024-12-06 13:22:37.153796] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:50.641 [2024-12-06 13:22:37.153798] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:50.641 [2024-12-06 13:22:37.153803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:50.641 [2024-12-06 13:22:37.153808] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:50.641 [2024-12-06 13:22:37.153811] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:50.641 [2024-12-06 13:22:37.153813] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.641 [2024-12-06 13:22:37.153818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153823] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:50.641 [2024-12-06 13:22:37.153825] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.641 [2024-12-06 13:22:37.153828] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.641 [2024-12-06 13:22:37.153832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153838] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:50.641 [2024-12-06 13:22:37.153841] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:50.641 [2024-12-06 13:22:37.153843] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.641 [2024-12-06 13:22:37.153848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:50.641 [2024-12-06 13:22:37.153853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:50.641 [2024-12-06 13:22:37.153874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:50.641 ===================================================== 00:14:50.641 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:50.641 ===================================================== 00:14:50.641 Controller Capabilities/Features 00:14:50.641 ================================ 00:14:50.641 Vendor ID: 4e58 00:14:50.641 Subsystem Vendor ID: 4e58 00:14:50.641 Serial Number: SPDK1 00:14:50.641 Model Number: SPDK bdev Controller 00:14:50.641 Firmware Version: 25.01 00:14:50.641 Recommended Arb Burst: 6 00:14:50.641 IEEE OUI Identifier: 8d 6b 50 00:14:50.641 Multi-path I/O 00:14:50.641 May have multiple subsystem ports: Yes 00:14:50.641 May have multiple controllers: Yes 00:14:50.641 Associated with SR-IOV VF: No 00:14:50.641 Max Data Transfer Size: 131072 00:14:50.641 Max Number of Namespaces: 32 00:14:50.641 Max Number of I/O Queues: 127 00:14:50.641 NVMe Specification Version (VS): 1.3 00:14:50.641 NVMe Specification Version (Identify): 1.3 00:14:50.641 Maximum Queue Entries: 256 00:14:50.641 Contiguous Queues Required: Yes 00:14:50.641 Arbitration Mechanisms Supported 00:14:50.641 Weighted Round Robin: Not Supported 00:14:50.641 Vendor Specific: Not Supported 00:14:50.641 Reset Timeout: 15000 ms 00:14:50.641 Doorbell Stride: 4 bytes 00:14:50.641 NVM Subsystem Reset: Not Supported 00:14:50.641 Command Sets Supported 00:14:50.641 NVM Command Set: Supported 00:14:50.641 Boot Partition: Not Supported 00:14:50.641 Memory 
Page Size Minimum: 4096 bytes 00:14:50.641 Memory Page Size Maximum: 4096 bytes 00:14:50.641 Persistent Memory Region: Not Supported 00:14:50.641 Optional Asynchronous Events Supported 00:14:50.641 Namespace Attribute Notices: Supported 00:14:50.641 Firmware Activation Notices: Not Supported 00:14:50.641 ANA Change Notices: Not Supported 00:14:50.641 PLE Aggregate Log Change Notices: Not Supported 00:14:50.641 LBA Status Info Alert Notices: Not Supported 00:14:50.641 EGE Aggregate Log Change Notices: Not Supported 00:14:50.641 Normal NVM Subsystem Shutdown event: Not Supported 00:14:50.641 Zone Descriptor Change Notices: Not Supported 00:14:50.641 Discovery Log Change Notices: Not Supported 00:14:50.641 Controller Attributes 00:14:50.641 128-bit Host Identifier: Supported 00:14:50.641 Non-Operational Permissive Mode: Not Supported 00:14:50.641 NVM Sets: Not Supported 00:14:50.641 Read Recovery Levels: Not Supported 00:14:50.641 Endurance Groups: Not Supported 00:14:50.641 Predictable Latency Mode: Not Supported 00:14:50.641 Traffic Based Keep ALive: Not Supported 00:14:50.641 Namespace Granularity: Not Supported 00:14:50.641 SQ Associations: Not Supported 00:14:50.641 UUID List: Not Supported 00:14:50.641 Multi-Domain Subsystem: Not Supported 00:14:50.641 Fixed Capacity Management: Not Supported 00:14:50.641 Variable Capacity Management: Not Supported 00:14:50.641 Delete Endurance Group: Not Supported 00:14:50.641 Delete NVM Set: Not Supported 00:14:50.641 Extended LBA Formats Supported: Not Supported 00:14:50.641 Flexible Data Placement Supported: Not Supported 00:14:50.641 00:14:50.641 Controller Memory Buffer Support 00:14:50.641 ================================ 00:14:50.641 Supported: No 00:14:50.641 00:14:50.641 Persistent Memory Region Support 00:14:50.641 ================================ 00:14:50.641 Supported: No 00:14:50.641 00:14:50.641 Admin Command Set Attributes 00:14:50.641 ============================ 00:14:50.641 Security Send/Receive: Not Supported 
00:14:50.641 Format NVM: Not Supported 00:14:50.641 Firmware Activate/Download: Not Supported 00:14:50.641 Namespace Management: Not Supported 00:14:50.641 Device Self-Test: Not Supported 00:14:50.641 Directives: Not Supported 00:14:50.641 NVMe-MI: Not Supported 00:14:50.641 Virtualization Management: Not Supported 00:14:50.641 Doorbell Buffer Config: Not Supported 00:14:50.641 Get LBA Status Capability: Not Supported 00:14:50.641 Command & Feature Lockdown Capability: Not Supported 00:14:50.642 Abort Command Limit: 4 00:14:50.642 Async Event Request Limit: 4 00:14:50.642 Number of Firmware Slots: N/A 00:14:50.642 Firmware Slot 1 Read-Only: N/A 00:14:50.642 Firmware Activation Without Reset: N/A 00:14:50.642 Multiple Update Detection Support: N/A 00:14:50.642 Firmware Update Granularity: No Information Provided 00:14:50.642 Per-Namespace SMART Log: No 00:14:50.642 Asymmetric Namespace Access Log Page: Not Supported 00:14:50.642 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:50.642 Command Effects Log Page: Supported 00:14:50.642 Get Log Page Extended Data: Supported 00:14:50.642 Telemetry Log Pages: Not Supported 00:14:50.642 Persistent Event Log Pages: Not Supported 00:14:50.642 Supported Log Pages Log Page: May Support 00:14:50.642 Commands Supported & Effects Log Page: Not Supported 00:14:50.642 Feature Identifiers & Effects Log Page:May Support 00:14:50.642 NVMe-MI Commands & Effects Log Page: May Support 00:14:50.642 Data Area 4 for Telemetry Log: Not Supported 00:14:50.642 Error Log Page Entries Supported: 128 00:14:50.642 Keep Alive: Supported 00:14:50.642 Keep Alive Granularity: 10000 ms 00:14:50.642 00:14:50.642 NVM Command Set Attributes 00:14:50.642 ========================== 00:14:50.642 Submission Queue Entry Size 00:14:50.642 Max: 64 00:14:50.642 Min: 64 00:14:50.642 Completion Queue Entry Size 00:14:50.642 Max: 16 00:14:50.642 Min: 16 00:14:50.642 Number of Namespaces: 32 00:14:50.642 Compare Command: Supported 00:14:50.642 Write Uncorrectable 
Command: Not Supported 00:14:50.642 Dataset Management Command: Supported 00:14:50.642 Write Zeroes Command: Supported 00:14:50.642 Set Features Save Field: Not Supported 00:14:50.642 Reservations: Not Supported 00:14:50.642 Timestamp: Not Supported 00:14:50.642 Copy: Supported 00:14:50.642 Volatile Write Cache: Present 00:14:50.642 Atomic Write Unit (Normal): 1 00:14:50.642 Atomic Write Unit (PFail): 1 00:14:50.642 Atomic Compare & Write Unit: 1 00:14:50.642 Fused Compare & Write: Supported 00:14:50.642 Scatter-Gather List 00:14:50.642 SGL Command Set: Supported (Dword aligned) 00:14:50.642 SGL Keyed: Not Supported 00:14:50.642 SGL Bit Bucket Descriptor: Not Supported 00:14:50.642 SGL Metadata Pointer: Not Supported 00:14:50.642 Oversized SGL: Not Supported 00:14:50.642 SGL Metadata Address: Not Supported 00:14:50.642 SGL Offset: Not Supported 00:14:50.642 Transport SGL Data Block: Not Supported 00:14:50.642 Replay Protected Memory Block: Not Supported 00:14:50.642 00:14:50.642 Firmware Slot Information 00:14:50.642 ========================= 00:14:50.642 Active slot: 1 00:14:50.642 Slot 1 Firmware Revision: 25.01 00:14:50.642 00:14:50.642 00:14:50.642 Commands Supported and Effects 00:14:50.642 ============================== 00:14:50.642 Admin Commands 00:14:50.642 -------------- 00:14:50.642 Get Log Page (02h): Supported 00:14:50.642 Identify (06h): Supported 00:14:50.642 Abort (08h): Supported 00:14:50.642 Set Features (09h): Supported 00:14:50.642 Get Features (0Ah): Supported 00:14:50.642 Asynchronous Event Request (0Ch): Supported 00:14:50.642 Keep Alive (18h): Supported 00:14:50.642 I/O Commands 00:14:50.642 ------------ 00:14:50.642 Flush (00h): Supported LBA-Change 00:14:50.642 Write (01h): Supported LBA-Change 00:14:50.642 Read (02h): Supported 00:14:50.642 Compare (05h): Supported 00:14:50.642 Write Zeroes (08h): Supported LBA-Change 00:14:50.642 Dataset Management (09h): Supported LBA-Change 00:14:50.642 Copy (19h): Supported LBA-Change 00:14:50.642 
00:14:50.642 Error Log 00:14:50.642 ========= 00:14:50.642 00:14:50.642 Arbitration 00:14:50.642 =========== 00:14:50.642 Arbitration Burst: 1 00:14:50.642 00:14:50.642 Power Management 00:14:50.642 ================ 00:14:50.642 Number of Power States: 1 00:14:50.642 Current Power State: Power State #0 00:14:50.642 Power State #0: 00:14:50.642 Max Power: 0.00 W 00:14:50.642 Non-Operational State: Operational 00:14:50.642 Entry Latency: Not Reported 00:14:50.642 Exit Latency: Not Reported 00:14:50.642 Relative Read Throughput: 0 00:14:50.642 Relative Read Latency: 0 00:14:50.642 Relative Write Throughput: 0 00:14:50.642 Relative Write Latency: 0 00:14:50.642 Idle Power: Not Reported 00:14:50.642 Active Power: Not Reported 00:14:50.642 Non-Operational Permissive Mode: Not Supported 00:14:50.642 00:14:50.642 Health Information 00:14:50.642 ================== 00:14:50.642 Critical Warnings: 00:14:50.642 Available Spare Space: OK 00:14:50.642 Temperature: OK 00:14:50.642 Device Reliability: OK 00:14:50.642 Read Only: No 00:14:50.642 Volatile Memory Backup: OK 00:14:50.642 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:50.642 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:50.642 Available Spare: 0% 00:14:50.642 Available Sp[2024-12-06 13:22:37.153948] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:50.642 [2024-12-06 13:22:37.153960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:50.642 [2024-12-06 13:22:37.153982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:50.642 [2024-12-06 13:22:37.153989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.642 [2024-12-06 13:22:37.153993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.642 [2024-12-06 13:22:37.153998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.642 [2024-12-06 13:22:37.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.642 [2024-12-06 13:22:37.157460] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:50.642 [2024-12-06 13:22:37.157469] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:50.642 [2024-12-06 13:22:37.158256] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.642 [2024-12-06 13:22:37.158297] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:50.642 [2024-12-06 13:22:37.158302] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:50.642 [2024-12-06 13:22:37.159270] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:50.642 [2024-12-06 13:22:37.159278] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:50.642 [2024-12-06 13:22:37.159328] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:50.642 [2024-12-06 13:22:37.160286] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.642 are Threshold: 0% 00:14:50.642 Life Percentage Used: 0% 
00:14:50.642 Data Units Read: 0 00:14:50.642 Data Units Written: 0 00:14:50.642 Host Read Commands: 0 00:14:50.642 Host Write Commands: 0 00:14:50.642 Controller Busy Time: 0 minutes 00:14:50.642 Power Cycles: 0 00:14:50.642 Power On Hours: 0 hours 00:14:50.642 Unsafe Shutdowns: 0 00:14:50.642 Unrecoverable Media Errors: 0 00:14:50.642 Lifetime Error Log Entries: 0 00:14:50.642 Warning Temperature Time: 0 minutes 00:14:50.642 Critical Temperature Time: 0 minutes 00:14:50.642 00:14:50.642 Number of Queues 00:14:50.642 ================ 00:14:50.642 Number of I/O Submission Queues: 127 00:14:50.642 Number of I/O Completion Queues: 127 00:14:50.642 00:14:50.642 Active Namespaces 00:14:50.642 ================= 00:14:50.642 Namespace ID:1 00:14:50.642 Error Recovery Timeout: Unlimited 00:14:50.642 Command Set Identifier: NVM (00h) 00:14:50.642 Deallocate: Supported 00:14:50.642 Deallocated/Unwritten Error: Not Supported 00:14:50.642 Deallocated Read Value: Unknown 00:14:50.642 Deallocate in Write Zeroes: Not Supported 00:14:50.642 Deallocated Guard Field: 0xFFFF 00:14:50.642 Flush: Supported 00:14:50.642 Reservation: Supported 00:14:50.642 Namespace Sharing Capabilities: Multiple Controllers 00:14:50.642 Size (in LBAs): 131072 (0GiB) 00:14:50.642 Capacity (in LBAs): 131072 (0GiB) 00:14:50.642 Utilization (in LBAs): 131072 (0GiB) 00:14:50.642 NGUID: DA1275700E7D41B5B6B18F60F500D028 00:14:50.642 UUID: da127570-0e7d-41b5-b6b1-8f60f500d028 00:14:50.642 Thin Provisioning: Not Supported 00:14:50.642 Per-NS Atomic Units: Yes 00:14:50.642 Atomic Boundary Size (Normal): 0 00:14:50.642 Atomic Boundary Size (PFail): 0 00:14:50.642 Atomic Boundary Offset: 0 00:14:50.642 Maximum Single Source Range Length: 65535 00:14:50.642 Maximum Copy Length: 65535 00:14:50.642 Maximum Source Range Count: 1 00:14:50.642 NGUID/EUI64 Never Reused: No 00:14:50.642 Namespace Write Protected: No 00:14:50.642 Number of LBA Formats: 1 00:14:50.642 Current LBA Format: LBA Format #00 00:14:50.642 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:50.642 00:14:50.643 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:50.903 [2024-12-06 13:22:37.346128] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.195 Initializing NVMe Controllers 00:14:56.195 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.195 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:56.195 Initialization complete. Launching workers. 00:14:56.195 ======================================================== 00:14:56.195 Latency(us) 00:14:56.195 Device Information : IOPS MiB/s Average min max 00:14:56.195 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40005.80 156.27 3199.45 874.53 9739.20 00:14:56.195 ======================================================== 00:14:56.195 Total : 40005.80 156.27 3199.45 874.53 9739.20 00:14:56.195 00:14:56.195 [2024-12-06 13:22:42.366107] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.195 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:56.195 [2024-12-06 13:22:42.560990] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.484 Initializing NVMe Controllers 00:15:01.484 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:01.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:01.484 Initialization complete. Launching workers. 00:15:01.484 ======================================================== 00:15:01.484 Latency(us) 00:15:01.484 Device Information : IOPS MiB/s Average min max 00:15:01.484 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7972.77 5924.31 9945.34 00:15:01.484 ======================================================== 00:15:01.484 Total : 16076.80 62.80 7972.77 5924.31 9945.34 00:15:01.484 00:15:01.484 [2024-12-06 13:22:47.598397] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.484 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:01.484 [2024-12-06 13:22:47.799267] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.768 [2024-12-06 13:22:52.881739] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.768 Initializing NVMe Controllers 00:15:06.768 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.768 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:06.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:06.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:06.768 Initialization complete. 
Launching workers. 00:15:06.768 Starting thread on core 2 00:15:06.768 Starting thread on core 3 00:15:06.768 Starting thread on core 1 00:15:06.768 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:06.768 [2024-12-06 13:22:53.140797] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.057 [2024-12-06 13:22:56.196957] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.057 Initializing NVMe Controllers 00:15:10.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:10.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:10.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:10.057 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:10.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:10.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:10.057 Initialization complete. Launching workers. 
00:15:10.057 Starting thread on core 1 with urgent priority queue 00:15:10.057 Starting thread on core 2 with urgent priority queue 00:15:10.057 Starting thread on core 3 with urgent priority queue 00:15:10.057 Starting thread on core 0 with urgent priority queue 00:15:10.057 SPDK bdev Controller (SPDK1 ) core 0: 8994.00 IO/s 11.12 secs/100000 ios 00:15:10.057 SPDK bdev Controller (SPDK1 ) core 1: 14505.33 IO/s 6.89 secs/100000 ios 00:15:10.057 SPDK bdev Controller (SPDK1 ) core 2: 8959.33 IO/s 11.16 secs/100000 ios 00:15:10.057 SPDK bdev Controller (SPDK1 ) core 3: 16693.67 IO/s 5.99 secs/100000 ios 00:15:10.057 ======================================================== 00:15:10.057 00:15:10.057 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:10.057 [2024-12-06 13:22:56.441863] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.057 Initializing NVMe Controllers 00:15:10.057 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.057 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.057 Namespace ID: 1 size: 0GB 00:15:10.057 Initialization complete. 00:15:10.057 INFO: using host memory buffer for IO 00:15:10.057 Hello world! 
00:15:10.057 [2024-12-06 13:22:56.478086] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.057 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:10.316 [2024-12-06 13:22:56.714818] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.275 Initializing NVMe Controllers 00:15:11.275 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.275 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.275 Initialization complete. Launching workers. 00:15:11.275 submit (in ns) avg, min, max = 6572.2, 2819.2, 3998669.2 00:15:11.275 complete (in ns) avg, min, max = 17720.1, 1632.5, 4033705.8 00:15:11.275 00:15:11.275 Submit histogram 00:15:11.275 ================ 00:15:11.275 Range in us Cumulative Count 00:15:11.275 2.813 - 2.827: 0.1953% ( 39) 00:15:11.275 2.827 - 2.840: 0.8613% ( 133) 00:15:11.275 2.840 - 2.853: 2.4989% ( 327) 00:15:11.275 2.853 - 2.867: 7.1861% ( 936) 00:15:11.275 2.867 - 2.880: 12.5244% ( 1066) 00:15:11.275 2.880 - 2.893: 19.1547% ( 1324) 00:15:11.275 2.893 - 2.907: 25.8401% ( 1335) 00:15:11.275 2.907 - 2.920: 30.6725% ( 965) 00:15:11.275 2.920 - 2.933: 36.4765% ( 1159) 00:15:11.275 2.933 - 2.947: 42.1153% ( 1126) 00:15:11.275 2.947 - 2.960: 47.1681% ( 1009) 00:15:11.275 2.960 - 2.973: 52.7217% ( 1109) 00:15:11.275 2.973 - 2.987: 60.2384% ( 1501) 00:15:11.275 2.987 - 3.000: 68.6164% ( 1673) 00:15:11.275 3.000 - 3.013: 77.5853% ( 1791) 00:15:11.275 3.013 - 3.027: 85.0168% ( 1484) 00:15:11.275 3.027 - 3.040: 90.8308% ( 1161) 00:15:11.275 3.040 - 3.053: 94.8670% ( 806) 00:15:11.275 3.053 - 3.067: 97.0304% ( 432) 00:15:11.275 3.067 - 3.080: 98.1471% ( 223) 00:15:11.275 3.080 - 3.093: 
98.6679% ( 104) 00:15:11.275 3.093 - 3.107: 99.0385% ( 74) 00:15:11.275 3.107 - 3.120: 99.2288% ( 38) 00:15:11.275 3.120 - 3.133: 99.3991% ( 34) 00:15:11.275 3.133 - 3.147: 99.4892% ( 18) 00:15:11.275 3.147 - 3.160: 99.5142% ( 5) 00:15:11.275 3.160 - 3.173: 99.5243% ( 2) 00:15:11.275 3.173 - 3.187: 99.5293% ( 1) 00:15:11.275 3.213 - 3.227: 99.5343% ( 1) 00:15:11.275 3.253 - 3.267: 99.5393% ( 1) 00:15:11.275 3.547 - 3.573: 99.5443% ( 1) 00:15:11.275 3.653 - 3.680: 99.5493% ( 1) 00:15:11.275 4.347 - 4.373: 99.5543% ( 1) 00:15:11.275 4.480 - 4.507: 99.5593% ( 1) 00:15:11.275 4.507 - 4.533: 99.5643% ( 1) 00:15:11.275 4.747 - 4.773: 99.5693% ( 1) 00:15:11.275 4.773 - 4.800: 99.5743% ( 1) 00:15:11.275 4.800 - 4.827: 99.5793% ( 1) 00:15:11.275 4.880 - 4.907: 99.5844% ( 1) 00:15:11.275 4.933 - 4.960: 99.5944% ( 2) 00:15:11.275 4.960 - 4.987: 99.5994% ( 1) 00:15:11.275 4.987 - 5.013: 99.6094% ( 2) 00:15:11.275 5.013 - 5.040: 99.6244% ( 3) 00:15:11.275 5.040 - 5.067: 99.6294% ( 1) 00:15:11.275 5.067 - 5.093: 99.6344% ( 1) 00:15:11.275 5.093 - 5.120: 99.6444% ( 2) 00:15:11.275 5.147 - 5.173: 99.6545% ( 2) 00:15:11.275 5.280 - 5.307: 99.6595% ( 1) 00:15:11.275 5.307 - 5.333: 99.6645% ( 1) 00:15:11.275 5.440 - 5.467: 99.6695% ( 1) 00:15:11.275 5.467 - 5.493: 99.6745% ( 1) 00:15:11.275 5.493 - 5.520: 99.6795% ( 1) 00:15:11.275 5.520 - 5.547: 99.6845% ( 1) 00:15:11.275 5.547 - 5.573: 99.6995% ( 3) 00:15:11.275 5.573 - 5.600: 99.7095% ( 2) 00:15:11.275 5.600 - 5.627: 99.7146% ( 1) 00:15:11.275 5.627 - 5.653: 99.7196% ( 1) 00:15:11.275 5.680 - 5.707: 99.7296% ( 2) 00:15:11.275 5.760 - 5.787: 99.7346% ( 1) 00:15:11.275 5.813 - 5.840: 99.7396% ( 1) 00:15:11.275 5.893 - 5.920: 99.7496% ( 2) 00:15:11.275 5.947 - 5.973: 99.7546% ( 1) 00:15:11.275 6.053 - 6.080: 99.7646% ( 2) 00:15:11.275 6.080 - 6.107: 99.7696% ( 1) 00:15:11.275 6.133 - 6.160: 99.7747% ( 1) 00:15:11.275 6.160 - 6.187: 99.7897% ( 3) 00:15:11.275 6.213 - 6.240: 99.7947% ( 1) 00:15:11.275 6.293 - 6.320: 99.8047% ( 2) 
00:15:11.275 6.320 - 6.347: 99.8097% ( 1) 00:15:11.275 6.373 - 6.400: 99.8147% ( 1) 00:15:11.275 6.400 - 6.427: 99.8347% ( 4) 00:15:11.275 6.453 - 6.480: 99.8398% ( 1) 00:15:11.275 6.507 - 6.533: 99.8448% ( 1) 00:15:11.275 6.533 - 6.560: 99.8498% ( 1) 00:15:11.275 6.613 - 6.640: 99.8548% ( 1) 00:15:11.275 6.747 - 6.773: 99.8598% ( 1) 00:15:11.275 6.800 - 6.827: 99.8648% ( 1) 00:15:11.275 6.933 - 6.987: 99.8748% ( 2) 00:15:11.275 6.987 - 7.040: 99.8798% ( 1) 00:15:11.275 [2024-12-06 13:22:57.735558] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.275 7.147 - 7.200: 99.8848% ( 1) 00:15:11.275 8.373 - 8.427: 99.8898% ( 1) 00:15:11.275 9.227 - 9.280: 99.8948% ( 1) 00:15:11.275 11.093 - 11.147: 99.8998% ( 1) 00:15:11.275 13.653 - 13.760: 99.9049% ( 1) 00:15:11.275 33.493 - 33.707: 99.9099% ( 1) 00:15:11.275 3986.773 - 4014.080: 100.0000% ( 18) 00:15:11.275 00:15:11.275 Complete histogram 00:15:11.275 ================== 00:15:11.275 Range in us Cumulative Count 00:15:11.275 1.627 - 1.633: 0.0050% ( 1) 00:15:11.275 1.640 - 1.647: 0.6009% ( 119) 00:15:11.275 1.647 - 1.653: 0.9014% ( 60) 00:15:11.275 1.653 - 1.660: 0.9214% ( 4) 00:15:11.275 1.660 - 1.667: 1.0667% ( 29) 00:15:11.275 1.667 - 1.673: 1.1618% ( 19) 00:15:11.275 1.673 - 1.680: 1.1969% ( 7) 00:15:11.275 1.680 - 1.687: 1.2119% ( 3) 00:15:11.275 1.687 - 1.693: 1.2269% ( 3) 00:15:11.275 1.693 - 1.700: 1.2419% ( 3) 00:15:11.275 1.700 - 1.707: 1.2970% ( 11) 00:15:11.275 1.707 - 1.720: 32.3802% ( 6207) 00:15:11.275 1.720 - 1.733: 48.7506% ( 3269) 00:15:11.275 1.733 - 1.747: 75.0213% ( 5246) 00:15:11.275 1.747 - 1.760: 82.5029% ( 1494) 00:15:11.275 1.760 - 1.773: 84.6712% ( 433) 00:15:11.275 1.773 - 1.787: 88.3970% ( 744) 00:15:11.275 1.787 - 1.800: 92.7438% ( 868) 00:15:11.275 1.800 - 1.813: 96.5146% ( 753) 00:15:11.275 1.813 - 1.827: 98.5427% ( 405) 00:15:11.275 1.827 - 1.840: 99.2538% ( 142) 00:15:11.275 1.840 - 1.853: 99.4191% ( 33) 00:15:11.275 1.853 - 
1.867: 99.4542% ( 7) 00:15:11.275 1.867 - 1.880: 99.4592% ( 1) 00:15:11.275 1.893 - 1.907: 99.4642% ( 1) 00:15:11.275 2.040 - 2.053: 99.4692% ( 1) 00:15:11.275 3.440 - 3.467: 99.4742% ( 1) 00:15:11.275 3.547 - 3.573: 99.4792% ( 1) 00:15:11.275 3.680 - 3.707: 99.4842% ( 1) 00:15:11.275 3.787 - 3.813: 99.4892% ( 1) 00:15:11.275 3.840 - 3.867: 99.4942% ( 1) 00:15:11.275 3.947 - 3.973: 99.4992% ( 1) 00:15:11.275 4.080 - 4.107: 99.5042% ( 1) 00:15:11.275 4.133 - 4.160: 99.5092% ( 1) 00:15:11.275 4.187 - 4.213: 99.5142% ( 1) 00:15:11.275 4.293 - 4.320: 99.5193% ( 1) 00:15:11.276 4.400 - 4.427: 99.5293% ( 2) 00:15:11.276 4.427 - 4.453: 99.5343% ( 1) 00:15:11.276 4.453 - 4.480: 99.5393% ( 1) 00:15:11.276 4.587 - 4.613: 99.5443% ( 1) 00:15:11.276 4.640 - 4.667: 99.5543% ( 2) 00:15:11.276 4.720 - 4.747: 99.5593% ( 1) 00:15:11.276 4.853 - 4.880: 99.5693% ( 2) 00:15:11.276 5.067 - 5.093: 99.5743% ( 1) 00:15:11.276 5.120 - 5.147: 99.5793% ( 1) 00:15:11.276 5.173 - 5.200: 99.5894% ( 2) 00:15:11.276 5.387 - 5.413: 99.5944% ( 1) 00:15:11.276 38.187 - 38.400: 99.5994% ( 1) 00:15:11.276 3426.987 - 3440.640: 99.6044% ( 1) 00:15:11.276 3986.773 - 4014.080: 99.9950% ( 78) 00:15:11.276 4014.080 - 4041.387: 100.0000% ( 1) 00:15:11.276 00:15:11.276 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:11.276 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.276 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.276 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:11.276 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:15:11.276 [ 00:15:11.276 { 00:15:11.276 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.276 "subtype": "Discovery", 00:15:11.276 "listen_addresses": [], 00:15:11.276 "allow_any_host": true, 00:15:11.276 "hosts": [] 00:15:11.276 }, 00:15:11.276 { 00:15:11.276 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.276 "subtype": "NVMe", 00:15:11.276 "listen_addresses": [ 00:15:11.276 { 00:15:11.276 "trtype": "VFIOUSER", 00:15:11.276 "adrfam": "IPv4", 00:15:11.276 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.276 "trsvcid": "0" 00:15:11.276 } 00:15:11.276 ], 00:15:11.276 "allow_any_host": true, 00:15:11.276 "hosts": [], 00:15:11.276 "serial_number": "SPDK1", 00:15:11.276 "model_number": "SPDK bdev Controller", 00:15:11.276 "max_namespaces": 32, 00:15:11.276 "min_cntlid": 1, 00:15:11.276 "max_cntlid": 65519, 00:15:11.276 "namespaces": [ 00:15:11.276 { 00:15:11.276 "nsid": 1, 00:15:11.276 "bdev_name": "Malloc1", 00:15:11.276 "name": "Malloc1", 00:15:11.276 "nguid": "DA1275700E7D41B5B6B18F60F500D028", 00:15:11.276 "uuid": "da127570-0e7d-41b5-b6b1-8f60f500d028" 00:15:11.276 } 00:15:11.276 ] 00:15:11.276 }, 00:15:11.276 { 00:15:11.276 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.276 "subtype": "NVMe", 00:15:11.276 "listen_addresses": [ 00:15:11.276 { 00:15:11.276 "trtype": "VFIOUSER", 00:15:11.276 "adrfam": "IPv4", 00:15:11.276 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.276 "trsvcid": "0" 00:15:11.276 } 00:15:11.276 ], 00:15:11.276 "allow_any_host": true, 00:15:11.276 "hosts": [], 00:15:11.276 "serial_number": "SPDK2", 00:15:11.276 "model_number": "SPDK bdev Controller", 00:15:11.276 "max_namespaces": 32, 00:15:11.276 "min_cntlid": 1, 00:15:11.276 "max_cntlid": 65519, 00:15:11.276 "namespaces": [ 00:15:11.276 { 00:15:11.276 "nsid": 1, 00:15:11.276 "bdev_name": "Malloc2", 00:15:11.276 "name": "Malloc2", 00:15:11.276 "nguid": "D47911E83BF2495588C86F427DB7FE51", 00:15:11.276 "uuid": "d47911e8-3bf2-4955-88c8-6f427db7fe51" 
00:15:11.276 } 00:15:11.276 ] 00:15:11.276 } 00:15:11.276 ] 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2105107 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.536 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:11.536 [2024-12-06 13:22:58.113793] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.536 Malloc3 00:15:11.536 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:11.797 [2024-12-06 13:22:58.308075] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.797 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.797 Asynchronous Event Request test 00:15:11.797 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.797 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.797 Registering asynchronous event callbacks... 00:15:11.797 Starting namespace attribute notice tests for all controllers... 00:15:11.797 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:11.797 aer_cb - Changed Namespace 00:15:11.797 Cleaning up... 
00:15:12.058 [ 00:15:12.058 { 00:15:12.058 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.058 "subtype": "Discovery", 00:15:12.058 "listen_addresses": [], 00:15:12.058 "allow_any_host": true, 00:15:12.058 "hosts": [] 00:15:12.058 }, 00:15:12.058 { 00:15:12.058 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.058 "subtype": "NVMe", 00:15:12.058 "listen_addresses": [ 00:15:12.058 { 00:15:12.058 "trtype": "VFIOUSER", 00:15:12.058 "adrfam": "IPv4", 00:15:12.058 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.058 "trsvcid": "0" 00:15:12.058 } 00:15:12.058 ], 00:15:12.058 "allow_any_host": true, 00:15:12.058 "hosts": [], 00:15:12.058 "serial_number": "SPDK1", 00:15:12.058 "model_number": "SPDK bdev Controller", 00:15:12.058 "max_namespaces": 32, 00:15:12.058 "min_cntlid": 1, 00:15:12.058 "max_cntlid": 65519, 00:15:12.058 "namespaces": [ 00:15:12.058 { 00:15:12.058 "nsid": 1, 00:15:12.058 "bdev_name": "Malloc1", 00:15:12.058 "name": "Malloc1", 00:15:12.058 "nguid": "DA1275700E7D41B5B6B18F60F500D028", 00:15:12.058 "uuid": "da127570-0e7d-41b5-b6b1-8f60f500d028" 00:15:12.058 }, 00:15:12.058 { 00:15:12.058 "nsid": 2, 00:15:12.058 "bdev_name": "Malloc3", 00:15:12.058 "name": "Malloc3", 00:15:12.058 "nguid": "37A7FCAA3A45478FB1607234ED74A469", 00:15:12.058 "uuid": "37a7fcaa-3a45-478f-b160-7234ed74a469" 00:15:12.058 } 00:15:12.058 ] 00:15:12.058 }, 00:15:12.058 { 00:15:12.058 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.058 "subtype": "NVMe", 00:15:12.058 "listen_addresses": [ 00:15:12.058 { 00:15:12.058 "trtype": "VFIOUSER", 00:15:12.058 "adrfam": "IPv4", 00:15:12.058 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.058 "trsvcid": "0" 00:15:12.058 } 00:15:12.058 ], 00:15:12.058 "allow_any_host": true, 00:15:12.058 "hosts": [], 00:15:12.058 "serial_number": "SPDK2", 00:15:12.058 "model_number": "SPDK bdev Controller", 00:15:12.058 "max_namespaces": 32, 00:15:12.058 "min_cntlid": 1, 00:15:12.058 "max_cntlid": 65519, 00:15:12.058 "namespaces": [ 
00:15:12.058 { 00:15:12.058 "nsid": 1, 00:15:12.058 "bdev_name": "Malloc2", 00:15:12.058 "name": "Malloc2", 00:15:12.058 "nguid": "D47911E83BF2495588C86F427DB7FE51", 00:15:12.058 "uuid": "d47911e8-3bf2-4955-88c8-6f427db7fe51" 00:15:12.058 } 00:15:12.058 ] 00:15:12.058 } 00:15:12.058 ] 00:15:12.058 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2105107 00:15:12.058 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.058 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:12.058 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:12.058 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:12.058 [2024-12-06 13:22:58.543247] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:15:12.059 [2024-12-06 13:22:58.543318] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105116 ] 00:15:12.059 [2024-12-06 13:22:58.585637] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:12.059 [2024-12-06 13:22:58.587821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.059 [2024-12-06 13:22:58.587840] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbb48c2d000 00:15:12.059 [2024-12-06 13:22:58.588823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.589829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.590831] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.591834] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.592836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.593841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.594848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.059 
[2024-12-06 13:22:58.595854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.059 [2024-12-06 13:22:58.596856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.059 [2024-12-06 13:22:58.596863] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbb48c22000 00:15:12.059 [2024-12-06 13:22:58.597773] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.059 [2024-12-06 13:22:58.610729] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:12.059 [2024-12-06 13:22:58.610746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:12.059 [2024-12-06 13:22:58.615817] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.059 [2024-12-06 13:22:58.615848] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:12.059 [2024-12-06 13:22:58.615909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:12.059 [2024-12-06 13:22:58.615918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:12.059 [2024-12-06 13:22:58.615921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:12.059 [2024-12-06 13:22:58.616823] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:12.059 [2024-12-06 13:22:58.616831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:12.059 [2024-12-06 13:22:58.616836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:12.059 [2024-12-06 13:22:58.617822] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.059 [2024-12-06 13:22:58.617829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:12.059 [2024-12-06 13:22:58.617834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:12.059 [2024-12-06 13:22:58.618831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:12.059 [2024-12-06 13:22:58.618838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:12.059 [2024-12-06 13:22:58.619841] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:12.059 [2024-12-06 13:22:58.619847] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:12.059 [2024-12-06 13:22:58.619851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:12.059 [2024-12-06 13:22:58.619856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:12.059 [2024-12-06 13:22:58.619962] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:12.059 [2024-12-06 13:22:58.619965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:12.059 [2024-12-06 13:22:58.619970] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:12.059 [2024-12-06 13:22:58.620852] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:12.059 [2024-12-06 13:22:58.621858] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:12.059 [2024-12-06 13:22:58.622864] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.059 [2024-12-06 13:22:58.623871] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.059 [2024-12-06 13:22:58.623900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:12.059 [2024-12-06 13:22:58.624884] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:12.059 [2024-12-06 13:22:58.624891] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:12.059 [2024-12-06 13:22:58.624895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.624909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:12.059 [2024-12-06 13:22:58.624915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.624926] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.059 [2024-12-06 13:22:58.624930] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.059 [2024-12-06 13:22:58.624932] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.059 [2024-12-06 13:22:58.624942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.059 [2024-12-06 13:22:58.631461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:12.059 [2024-12-06 13:22:58.631469] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:12.059 [2024-12-06 13:22:58.631474] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:12.059 [2024-12-06 13:22:58.631478] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:12.059 [2024-12-06 13:22:58.631481] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:12.059 [2024-12-06 13:22:58.631484] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:12.059 [2024-12-06 13:22:58.631488] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:12.059 [2024-12-06 13:22:58.631491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.631497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.631504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:12.059 [2024-12-06 13:22:58.639459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:12.059 [2024-12-06 13:22:58.639469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.059 [2024-12-06 13:22:58.639475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.059 [2024-12-06 13:22:58.639481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.059 [2024-12-06 13:22:58.639487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.059 [2024-12-06 13:22:58.639490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.639497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.639504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:12.059 [2024-12-06 13:22:58.647458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:12.059 [2024-12-06 13:22:58.647464] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:12.059 [2024-12-06 13:22:58.647468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.647473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.647477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.647484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.059 [2024-12-06 13:22:58.655459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:12.059 [2024-12-06 13:22:58.655506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:12.059 [2024-12-06 13:22:58.655512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:12.060 
[2024-12-06 13:22:58.655517] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:12.060 [2024-12-06 13:22:58.655520] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:12.060 [2024-12-06 13:22:58.655523] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.060 [2024-12-06 13:22:58.655527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.663459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 13:22:58.663470] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:12.060 [2024-12-06 13:22:58.663479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.663485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.663491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.060 [2024-12-06 13:22:58.663494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.060 [2024-12-06 13:22:58.663497] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.060 [2024-12-06 13:22:58.663501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.671460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 13:22:58.671472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.671478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.671483] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.060 [2024-12-06 13:22:58.671486] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.060 [2024-12-06 13:22:58.671488] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.060 [2024-12-06 13:22:58.671493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.679459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 13:22:58.679467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679494] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:12.060 [2024-12-06 13:22:58.679498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:12.060 [2024-12-06 13:22:58.679501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:12.060 [2024-12-06 13:22:58.679514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.687459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 13:22:58.687471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.695459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 13:22:58.695470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.703458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 
13:22:58.703468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.711458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:12.060 [2024-12-06 13:22:58.711469] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:12.060 [2024-12-06 13:22:58.711473] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:12.060 [2024-12-06 13:22:58.711475] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:12.060 [2024-12-06 13:22:58.711478] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:12.060 [2024-12-06 13:22:58.711480] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:12.060 [2024-12-06 13:22:58.711485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:12.060 [2024-12-06 13:22:58.711491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:12.060 [2024-12-06 13:22:58.711494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:12.060 [2024-12-06 13:22:58.711496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.060 [2024-12-06 13:22:58.711500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.711505] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:12.060 [2024-12-06 13:22:58.711508] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.060 [2024-12-06 13:22:58.711511] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.060 [2024-12-06 13:22:58.711515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.060 [2024-12-06 13:22:58.711521] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:12.060 [2024-12-06 13:22:58.711524] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:12.060 [2024-12-06 13:22:58.711526] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.060 [2024-12-06 13:22:58.711530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:12.321 [2024-12-06 13:22:58.719459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:12.321 [2024-12-06 13:22:58.719471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:12.321 [2024-12-06 13:22:58.719478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:12.321 [2024-12-06 13:22:58.719483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:12.321 ===================================================== 00:15:12.321 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.321 ===================================================== 00:15:12.321 Controller Capabilities/Features 00:15:12.321 
================================
00:15:12.321 Vendor ID: 4e58
00:15:12.321 Subsystem Vendor ID: 4e58
00:15:12.321 Serial Number: SPDK2
00:15:12.321 Model Number: SPDK bdev Controller
00:15:12.321 Firmware Version: 25.01
00:15:12.321 Recommended Arb Burst: 6
00:15:12.321 IEEE OUI Identifier: 8d 6b 50
00:15:12.321 Multi-path I/O
00:15:12.321 May have multiple subsystem ports: Yes
00:15:12.321 May have multiple controllers: Yes
00:15:12.321 Associated with SR-IOV VF: No
00:15:12.321 Max Data Transfer Size: 131072
00:15:12.321 Max Number of Namespaces: 32
00:15:12.321 Max Number of I/O Queues: 127
00:15:12.321 NVMe Specification Version (VS): 1.3
00:15:12.321 NVMe Specification Version (Identify): 1.3
00:15:12.321 Maximum Queue Entries: 256
00:15:12.321 Contiguous Queues Required: Yes
00:15:12.321 Arbitration Mechanisms Supported
00:15:12.321 Weighted Round Robin: Not Supported
00:15:12.321 Vendor Specific: Not Supported
00:15:12.321 Reset Timeout: 15000 ms
00:15:12.321 Doorbell Stride: 4 bytes
00:15:12.321 NVM Subsystem Reset: Not Supported
00:15:12.321 Command Sets Supported
00:15:12.322 NVM Command Set: Supported
00:15:12.322 Boot Partition: Not Supported
00:15:12.322 Memory Page Size Minimum: 4096 bytes
00:15:12.322 Memory Page Size Maximum: 4096 bytes
00:15:12.322 Persistent Memory Region: Not Supported
00:15:12.322 Optional Asynchronous Events Supported
00:15:12.322 Namespace Attribute Notices: Supported
00:15:12.322 Firmware Activation Notices: Not Supported
00:15:12.322 ANA Change Notices: Not Supported
00:15:12.322 PLE Aggregate Log Change Notices: Not Supported
00:15:12.322 LBA Status Info Alert Notices: Not Supported
00:15:12.322 EGE Aggregate Log Change Notices: Not Supported
00:15:12.322 Normal NVM Subsystem Shutdown event: Not Supported
00:15:12.322 Zone Descriptor Change Notices: Not Supported
00:15:12.322 Discovery Log Change Notices: Not Supported
00:15:12.322 Controller Attributes
00:15:12.322 128-bit Host Identifier: Supported
00:15:12.322 
Non-Operational Permissive Mode: Not Supported
00:15:12.322 NVM Sets: Not Supported
00:15:12.322 Read Recovery Levels: Not Supported
00:15:12.322 Endurance Groups: Not Supported
00:15:12.322 Predictable Latency Mode: Not Supported
00:15:12.322 Traffic Based Keep ALive: Not Supported
00:15:12.322 Namespace Granularity: Not Supported
00:15:12.322 SQ Associations: Not Supported
00:15:12.322 UUID List: Not Supported
00:15:12.322 Multi-Domain Subsystem: Not Supported
00:15:12.322 Fixed Capacity Management: Not Supported
00:15:12.322 Variable Capacity Management: Not Supported
00:15:12.322 Delete Endurance Group: Not Supported
00:15:12.322 Delete NVM Set: Not Supported
00:15:12.322 Extended LBA Formats Supported: Not Supported
00:15:12.322 Flexible Data Placement Supported: Not Supported
00:15:12.322 
00:15:12.322 Controller Memory Buffer Support
00:15:12.322 ================================
00:15:12.322 Supported: No
00:15:12.322 
00:15:12.322 Persistent Memory Region Support
00:15:12.322 ================================
00:15:12.322 Supported: No
00:15:12.322 
00:15:12.322 Admin Command Set Attributes
00:15:12.322 ============================
00:15:12.322 Security Send/Receive: Not Supported
00:15:12.322 Format NVM: Not Supported
00:15:12.322 Firmware Activate/Download: Not Supported
00:15:12.322 Namespace Management: Not Supported
00:15:12.322 Device Self-Test: Not Supported
00:15:12.322 Directives: Not Supported
00:15:12.322 NVMe-MI: Not Supported
00:15:12.322 Virtualization Management: Not Supported
00:15:12.322 Doorbell Buffer Config: Not Supported
00:15:12.322 Get LBA Status Capability: Not Supported
00:15:12.322 Command & Feature Lockdown Capability: Not Supported
00:15:12.322 Abort Command Limit: 4
00:15:12.322 Async Event Request Limit: 4
00:15:12.322 Number of Firmware Slots: N/A
00:15:12.322 Firmware Slot 1 Read-Only: N/A
00:15:12.322 Firmware Activation Without Reset: N/A
00:15:12.322 Multiple Update Detection Support: N/A
00:15:12.322 Firmware Update Granularity: No Information Provided
00:15:12.322 Per-Namespace SMART Log: No
00:15:12.322 Asymmetric Namespace Access Log Page: Not Supported
00:15:12.322 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:15:12.322 Command Effects Log Page: Supported
00:15:12.322 Get Log Page Extended Data: Supported
00:15:12.322 Telemetry Log Pages: Not Supported
00:15:12.322 Persistent Event Log Pages: Not Supported
00:15:12.322 Supported Log Pages Log Page: May Support
00:15:12.322 Commands Supported & Effects Log Page: Not Supported
00:15:12.322 Feature Identifiers & Effects Log Page:May Support
00:15:12.322 NVMe-MI Commands & Effects Log Page: May Support
00:15:12.322 Data Area 4 for Telemetry Log: Not Supported
00:15:12.322 Error Log Page Entries Supported: 128
00:15:12.322 Keep Alive: Supported
00:15:12.322 Keep Alive Granularity: 10000 ms
00:15:12.322 
00:15:12.322 NVM Command Set Attributes
00:15:12.322 ==========================
00:15:12.322 Submission Queue Entry Size
00:15:12.322 Max: 64
00:15:12.322 Min: 64
00:15:12.322 Completion Queue Entry Size
00:15:12.322 Max: 16
00:15:12.322 Min: 16
00:15:12.322 Number of Namespaces: 32
00:15:12.322 Compare Command: Supported
00:15:12.322 Write Uncorrectable Command: Not Supported
00:15:12.322 Dataset Management Command: Supported
00:15:12.322 Write Zeroes Command: Supported
00:15:12.322 Set Features Save Field: Not Supported
00:15:12.322 Reservations: Not Supported
00:15:12.322 Timestamp: Not Supported
00:15:12.322 Copy: Supported
00:15:12.322 Volatile Write Cache: Present
00:15:12.322 Atomic Write Unit (Normal): 1
00:15:12.322 Atomic Write Unit (PFail): 1
00:15:12.322 Atomic Compare & Write Unit: 1
00:15:12.322 Fused Compare & Write: Supported
00:15:12.322 Scatter-Gather List
00:15:12.322 SGL Command Set: Supported (Dword aligned)
00:15:12.322 SGL Keyed: Not Supported
00:15:12.322 SGL Bit Bucket Descriptor: Not Supported
00:15:12.322 SGL Metadata Pointer: Not Supported
00:15:12.322 Oversized SGL: Not Supported
00:15:12.322 SGL Metadata Address: Not Supported
00:15:12.322 SGL Offset: Not Supported
00:15:12.322 Transport SGL Data Block: Not Supported
00:15:12.322 Replay Protected Memory Block: Not Supported
00:15:12.322 
00:15:12.322 Firmware Slot Information
00:15:12.322 =========================
00:15:12.322 Active slot: 1
00:15:12.322 Slot 1 Firmware Revision: 25.01
00:15:12.322 
00:15:12.322 
00:15:12.322 Commands Supported and Effects
00:15:12.322 ==============================
00:15:12.322 Admin Commands
00:15:12.322 --------------
00:15:12.322 Get Log Page (02h): Supported
00:15:12.322 Identify (06h): Supported
00:15:12.322 Abort (08h): Supported
00:15:12.322 Set Features (09h): Supported
00:15:12.322 Get Features (0Ah): Supported
00:15:12.322 Asynchronous Event Request (0Ch): Supported
00:15:12.322 Keep Alive (18h): Supported
00:15:12.322 I/O Commands
00:15:12.322 ------------
00:15:12.322 Flush (00h): Supported LBA-Change
00:15:12.322 Write (01h): Supported LBA-Change
00:15:12.322 Read (02h): Supported
00:15:12.322 Compare (05h): Supported
00:15:12.322 Write Zeroes (08h): Supported LBA-Change
00:15:12.322 Dataset Management (09h): Supported LBA-Change
00:15:12.322 Copy (19h): Supported LBA-Change
00:15:12.322 
00:15:12.322 Error Log
00:15:12.322 =========
00:15:12.322 
00:15:12.322 Arbitration
00:15:12.322 ===========
00:15:12.322 Arbitration Burst: 1
00:15:12.322 
00:15:12.322 Power Management
00:15:12.322 ================
00:15:12.322 Number of Power States: 1
00:15:12.322 Current Power State: Power State #0
00:15:12.322 Power State #0:
00:15:12.322 Max Power: 0.00 W
00:15:12.322 Non-Operational State: Operational
00:15:12.322 Entry Latency: Not Reported
00:15:12.322 Exit Latency: Not Reported
00:15:12.322 Relative Read Throughput: 0
00:15:12.322 Relative Read Latency: 0
00:15:12.322 Relative Write Throughput: 0
00:15:12.322 Relative Write Latency: 0
00:15:12.322 Idle Power: Not Reported
00:15:12.322 Active Power: Not Reported
00:15:12.322 Non-Operational Permissive Mode: Not Supported
00:15:12.322 
00:15:12.322 Health Information
00:15:12.322 ==================
00:15:12.322 Critical Warnings:
00:15:12.322 Available Spare Space: OK
00:15:12.322 Temperature: OK
00:15:12.322 Device Reliability: OK
00:15:12.322 Read Only: No
00:15:12.322 Volatile Memory Backup: OK
00:15:12.322 Current Temperature: 0 Kelvin (-273 Celsius)
00:15:12.322 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:15:12.322 Available Spare: 0%
00:15:12.322 Available Sp[2024-12-06 13:22:58.719558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:12.322 [2024-12-06 13:22:58.727461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:12.322 [2024-12-06 13:22:58.727489] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:15:12.322 [2024-12-06 13:22:58.727497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:12.322 [2024-12-06 13:22:58.727502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:12.322 [2024-12-06 13:22:58.727506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:12.322 [2024-12-06 13:22:58.727511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:12.322 [2024-12-06 13:22:58.727540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:12.322 [2024-12-06 13:22:58.727548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:15:12.322 
[2024-12-06 13:22:58.728549] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:12.322 [2024-12-06 13:22:58.728586] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:15:12.322 [2024-12-06 13:22:58.728591] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:15:12.322 [2024-12-06 13:22:58.729549] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:15:12.323 [2024-12-06 13:22:58.729559] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:15:12.323 [2024-12-06 13:22:58.729600] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:15:12.323 [2024-12-06 13:22:58.730570] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:12.323 are Threshold: 0%
00:15:12.323 Life Percentage Used: 0%
00:15:12.323 Data Units Read: 0
00:15:12.323 Data Units Written: 0
00:15:12.323 Host Read Commands: 0
00:15:12.323 Host Write Commands: 0
00:15:12.323 Controller Busy Time: 0 minutes
00:15:12.323 Power Cycles: 0
00:15:12.323 Power On Hours: 0 hours
00:15:12.323 Unsafe Shutdowns: 0
00:15:12.323 Unrecoverable Media Errors: 0
00:15:12.323 Lifetime Error Log Entries: 0
00:15:12.323 Warning Temperature Time: 0 minutes
00:15:12.323 Critical Temperature Time: 0 minutes
00:15:12.323 
00:15:12.323 Number of Queues
00:15:12.323 ================
00:15:12.323 Number of I/O Submission Queues: 127
00:15:12.323 Number of I/O Completion Queues: 127
00:15:12.323 
00:15:12.323 Active Namespaces
00:15:12.323 =================
00:15:12.323 Namespace ID:1
00:15:12.323 Error Recovery Timeout: Unlimited
00:15:12.323 Command Set Identifier: NVM (00h)
00:15:12.323 Deallocate: Supported
00:15:12.323 Deallocated/Unwritten Error: Not Supported
00:15:12.323 Deallocated Read Value: Unknown
00:15:12.323 Deallocate in Write Zeroes: Not Supported
00:15:12.323 Deallocated Guard Field: 0xFFFF
00:15:12.323 Flush: Supported
00:15:12.323 Reservation: Supported
00:15:12.323 Namespace Sharing Capabilities: Multiple Controllers
00:15:12.323 Size (in LBAs): 131072 (0GiB)
00:15:12.323 Capacity (in LBAs): 131072 (0GiB)
00:15:12.323 Utilization (in LBAs): 131072 (0GiB)
00:15:12.323 NGUID: D47911E83BF2495588C86F427DB7FE51
00:15:12.323 UUID: d47911e8-3bf2-4955-88c8-6f427db7fe51
00:15:12.323 Thin Provisioning: Not Supported
00:15:12.323 Per-NS Atomic Units: Yes
00:15:12.323 Atomic Boundary Size (Normal): 0
00:15:12.323 Atomic Boundary Size (PFail): 0
00:15:12.323 Atomic Boundary Offset: 0
00:15:12.323 Maximum Single Source Range Length: 65535
00:15:12.323 Maximum Copy Length: 65535
00:15:12.323 Maximum Source Range Count: 1
00:15:12.323 NGUID/EUI64 Never Reused: No
00:15:12.323 Namespace Write Protected: No
00:15:12.323 Number of LBA Formats: 1
00:15:12.323 Current LBA Format: LBA Format #00
00:15:12.323 LBA Format #00: Data Size: 512 Metadata Size: 0
00:15:12.323 
00:15:12.323 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:15:12.323 [2024-12-06 13:22:58.924203] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:17.610 Initializing NVMe Controllers
00:15:17.610 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:17.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:15:17.610 Initialization complete. Launching workers.
00:15:17.610 ========================================================
00:15:17.610 Latency(us)
00:15:17.610 Device Information : IOPS MiB/s Average min max
00:15:17.610 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39961.73 156.10 3202.74 868.29 8755.01
00:15:17.610 ========================================================
00:15:17.610 Total : 39961.73 156.10 3202.74 868.29 8755.01
00:15:17.610 
00:15:17.610 [2024-12-06 13:23:04.030654] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:17.610 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:15:17.610 [2024-12-06 13:23:04.221276] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:22.888 Initializing NVMe Controllers
00:15:22.888 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:22.888 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:15:22.888 Initialization complete. Launching workers.
00:15:22.888 ========================================================
00:15:22.888 Latency(us)
00:15:22.888 Device Information : IOPS MiB/s Average min max
00:15:22.888 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39958.55 156.09 3203.00 861.18 8733.60
00:15:22.888 ========================================================
00:15:22.888 Total : 39958.55 156.09 3203.00 861.18 8733.60
00:15:22.888 
00:15:22.888 [2024-12-06 13:23:09.237922] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:22.888 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:15:22.888 [2024-12-06 13:23:09.445157] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:28.171 [2024-12-06 13:23:14.589533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:28.171 Initializing NVMe Controllers
00:15:28.171 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:28.171 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:28.171 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:15:28.171 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:15:28.171 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:15:28.171 Initialization complete. Launching workers.
00:15:28.171 Starting thread on core 2
00:15:28.171 Starting thread on core 3
00:15:28.171 Starting thread on core 1
00:15:28.171 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:15:28.430 [2024-12-06 13:23:14.843888] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:31.718 [2024-12-06 13:23:17.915476] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:31.718 Initializing NVMe Controllers
00:15:31.718 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:15:31.718 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:15:31.718 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:15:31.718 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:15:31.718 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:15:31.718 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:15:31.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:15:31.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:15:31.718 Initialization complete. Launching workers.
00:15:31.718 Starting thread on core 1 with urgent priority queue
00:15:31.718 Starting thread on core 2 with urgent priority queue
00:15:31.718 Starting thread on core 3 with urgent priority queue
00:15:31.718 Starting thread on core 0 with urgent priority queue
00:15:31.718 SPDK bdev Controller (SPDK2 ) core 0: 8266.00 IO/s 12.10 secs/100000 ios
00:15:31.718 SPDK bdev Controller (SPDK2 ) core 1: 6320.00 IO/s 15.82 secs/100000 ios
00:15:31.719 SPDK bdev Controller (SPDK2 ) core 2: 8440.00 IO/s 11.85 secs/100000 ios
00:15:31.719 SPDK bdev Controller (SPDK2 ) core 3: 7713.67 IO/s 12.96 secs/100000 ios
00:15:31.719 ========================================================
00:15:31.719 
00:15:31.719 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:15:31.719 [2024-12-06 13:23:18.151824] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:31.719 Initializing NVMe Controllers
00:15:31.719 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:15:31.719 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:15:31.719 Namespace ID: 1 size: 0GB
00:15:31.719 Initialization complete.
00:15:31.719 INFO: using host memory buffer for IO
00:15:31.719 Hello world!
00:15:31.719 [2024-12-06 13:23:18.161888] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.719 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.980 [2024-12-06 13:23:18.398833] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.138 Initializing NVMe Controllers 00:15:33.138 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.138 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.138 Initialization complete. Launching workers. 00:15:33.138 submit (in ns) avg, min, max = 6135.3, 2812.5, 3997617.5 00:15:33.138 complete (in ns) avg, min, max = 17174.3, 1640.0, 3996992.5 00:15:33.138 00:15:33.138 Submit histogram 00:15:33.138 ================ 00:15:33.138 Range in us Cumulative Count 00:15:33.138 2.800 - 2.813: 0.0050% ( 1) 00:15:33.138 2.813 - 2.827: 0.2580% ( 51) 00:15:33.138 2.827 - 2.840: 1.4934% ( 249) 00:15:33.138 2.840 - 2.853: 3.6021% ( 425) 00:15:33.138 2.853 - 2.867: 8.0377% ( 894) 00:15:33.138 2.867 - 2.880: 13.1878% ( 1038) 00:15:33.138 2.880 - 2.893: 19.0176% ( 1175) 00:15:33.138 2.893 - 2.907: 23.7609% ( 956) 00:15:33.138 2.907 - 2.920: 29.7544% ( 1208) 00:15:33.138 2.920 - 2.933: 35.8472% ( 1228) 00:15:33.138 2.933 - 2.947: 40.8236% ( 1003) 00:15:33.138 2.947 - 2.960: 46.4202% ( 1128) 00:15:33.138 2.960 - 2.973: 52.3543% ( 1196) 00:15:33.138 2.973 - 2.987: 59.0672% ( 1353) 00:15:33.138 2.987 - 3.000: 67.1694% ( 1633) 00:15:33.138 3.000 - 3.013: 76.3235% ( 1845) 00:15:33.138 3.013 - 3.027: 83.7360% ( 1494) 00:15:33.138 3.027 - 3.040: 90.5185% ( 1367) 00:15:33.138 3.040 - 3.053: 95.0236% ( 908) 00:15:33.138 3.053 - 3.067: 97.7078% ( 541) 00:15:33.138 3.067 - 3.080: 
98.9779% ( 256) 00:15:33.138 3.080 - 3.093: 99.3649% ( 78) 00:15:33.138 3.093 - 3.107: 99.5584% ( 39) 00:15:33.138 3.107 - 3.120: 99.6130% ( 11) 00:15:33.138 3.120 - 3.133: 99.6428% ( 6) 00:15:33.138 3.133 - 3.147: 99.6676% ( 5) 00:15:33.138 3.147 - 3.160: 99.6825% ( 3) 00:15:33.138 3.200 - 3.213: 99.6874% ( 1) 00:15:33.138 3.373 - 3.387: 99.6924% ( 1) 00:15:33.138 3.400 - 3.413: 99.6973% ( 1) 00:15:33.138 3.440 - 3.467: 99.7023% ( 1) 00:15:33.138 3.600 - 3.627: 99.7073% ( 1) 00:15:33.138 3.813 - 3.840: 99.7122% ( 1) 00:15:33.138 3.973 - 4.000: 99.7172% ( 1) 00:15:33.138 4.453 - 4.480: 99.7271% ( 2) 00:15:33.138 4.533 - 4.560: 99.7321% ( 1) 00:15:33.138 4.560 - 4.587: 99.7370% ( 1) 00:15:33.138 4.613 - 4.640: 99.7420% ( 1) 00:15:33.138 4.773 - 4.800: 99.7519% ( 2) 00:15:33.138 4.800 - 4.827: 99.7618% ( 2) 00:15:33.138 4.827 - 4.853: 99.7668% ( 1) 00:15:33.138 4.880 - 4.907: 99.7718% ( 1) 00:15:33.138 4.907 - 4.933: 99.7767% ( 1) 00:15:33.138 4.933 - 4.960: 99.7867% ( 2) 00:15:33.138 4.987 - 5.013: 99.7966% ( 2) 00:15:33.138 5.040 - 5.067: 99.8015% ( 1) 00:15:33.138 5.067 - 5.093: 99.8164% ( 3) 00:15:33.138 5.093 - 5.120: 99.8214% ( 1) 00:15:33.138 5.200 - 5.227: 99.8263% ( 1) 00:15:33.138 5.227 - 5.253: 99.8363% ( 2) 00:15:33.138 5.280 - 5.307: 99.8412% ( 1) 00:15:33.138 5.333 - 5.360: 99.8462% ( 1) 00:15:33.138 5.387 - 5.413: 99.8561% ( 2) 00:15:33.138 5.440 - 5.467: 99.8611% ( 1) 00:15:33.138 5.520 - 5.547: 99.8660% ( 1) 00:15:33.138 5.627 - 5.653: 99.8710% ( 1) 00:15:33.138 5.653 - 5.680: 99.8760% ( 1) 00:15:33.138 5.680 - 5.707: 99.8809% ( 1) 00:15:33.138 5.733 - 5.760: 99.8859% ( 1) 00:15:33.138 5.787 - 5.813: 99.8908% ( 1) 00:15:33.138 5.920 - 5.947: 99.9008% ( 2) 00:15:33.138 6.027 - 6.053: 99.9057% ( 1) 00:15:33.138 6.107 - 6.133: 99.9107% ( 1) 00:15:33.138 6.320 - 6.347: 99.9157% ( 1) 00:15:33.138 7.893 - 7.947: 99.9206% ( 1) 00:15:33.138 3986.773 - 4014.080: 100.0000% ( 16) 00:15:33.138 00:15:33.138 Complete histogram 00:15:33.138 ================== 
00:15:33.138 Range in us Cumulative Count 00:15:33.138 1.640 - 1.647: 1.0370% ( 209) 00:15:33.138 1.647 - 1.653: 1.7117% ( 136) 00:15:33.138 1.653 - 1.660: 2.0442% ( 67) 00:15:33.138 1.660 - 1.667: 2.4510% ( 82) 00:15:33.138 1.667 - 1.673: 2.5999% ( 30) 00:15:33.138 1.673 - 1.680: 2.6991% ( 20) 00:15:33.138 1.680 - 1.687: 2.7586% ( 12) 00:15:33.138 1.687 - 1.693: 2.7785% ( 4) 00:15:33.138 1.693 - 1.700: 5.0062% ( 449) 00:15:33.138 1.700 - 1.707: 34.5225% ( 5949) 00:15:33.138 1.707 - 1.720: 62.7834% ( 5696) 00:15:33.138 1.720 - [2024-12-06 13:23:19.494061] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.138 1.733: 87.5614% ( 4994) 00:15:33.138 1.733 - 1.747: 94.4331% ( 1385) 00:15:33.138 1.747 - 1.760: 95.9861% ( 313) 00:15:33.138 1.760 - 1.773: 97.0727% ( 219) 00:15:33.138 1.773 - 1.787: 98.4024% ( 268) 00:15:33.138 1.787 - 1.800: 99.0821% ( 137) 00:15:33.138 1.800 - 1.813: 99.4046% ( 65) 00:15:33.138 1.813 - 1.827: 99.4542% ( 10) 00:15:33.138 1.840 - 1.853: 99.4642% ( 2) 00:15:33.138 1.947 - 1.960: 99.4691% ( 1) 00:15:33.138 1.987 - 2.000: 99.4741% ( 1) 00:15:33.138 3.227 - 3.240: 99.4790% ( 1) 00:15:33.138 3.253 - 3.267: 99.4840% ( 1) 00:15:33.138 3.347 - 3.360: 99.4939% ( 2) 00:15:33.138 3.373 - 3.387: 99.4989% ( 1) 00:15:33.138 3.493 - 3.520: 99.5088% ( 2) 00:15:33.138 3.520 - 3.547: 99.5138% ( 1) 00:15:33.138 3.547 - 3.573: 99.5187% ( 1) 00:15:33.138 3.707 - 3.733: 99.5237% ( 1) 00:15:33.138 3.760 - 3.787: 99.5287% ( 1) 00:15:33.138 3.973 - 4.000: 99.5336% ( 1) 00:15:33.138 4.000 - 4.027: 99.5386% ( 1) 00:15:33.138 4.053 - 4.080: 99.5435% ( 1) 00:15:33.138 4.187 - 4.213: 99.5485% ( 1) 00:15:33.138 4.240 - 4.267: 99.5634% ( 3) 00:15:33.138 4.400 - 4.427: 99.5683% ( 1) 00:15:33.138 4.480 - 4.507: 99.5733% ( 1) 00:15:33.138 4.507 - 4.533: 99.5783% ( 1) 00:15:33.138 4.587 - 4.613: 99.5832% ( 1) 00:15:33.138 4.613 - 4.640: 99.5882% ( 1) 00:15:33.138 4.827 - 4.853: 99.5932% ( 1) 00:15:33.138 5.093 - 
5.120: 99.5981% ( 1) 00:15:33.138 5.120 - 5.147: 99.6031% ( 1) 00:15:33.138 7.893 - 7.947: 99.6080% ( 1) 00:15:33.138 8.373 - 8.427: 99.6130% ( 1) 00:15:33.138 3986.773 - 4014.080: 100.0000% ( 78) 00:15:33.138 00:15:33.138 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:33.138 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.138 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.138 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.139 [ 00:15:33.139 { 00:15:33.139 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.139 "subtype": "Discovery", 00:15:33.139 "listen_addresses": [], 00:15:33.139 "allow_any_host": true, 00:15:33.139 "hosts": [] 00:15:33.139 }, 00:15:33.139 { 00:15:33.139 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.139 "subtype": "NVMe", 00:15:33.139 "listen_addresses": [ 00:15:33.139 { 00:15:33.139 "trtype": "VFIOUSER", 00:15:33.139 "adrfam": "IPv4", 00:15:33.139 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.139 "trsvcid": "0" 00:15:33.139 } 00:15:33.139 ], 00:15:33.139 "allow_any_host": true, 00:15:33.139 "hosts": [], 00:15:33.139 "serial_number": "SPDK1", 00:15:33.139 "model_number": "SPDK bdev Controller", 00:15:33.139 "max_namespaces": 32, 00:15:33.139 "min_cntlid": 1, 00:15:33.139 "max_cntlid": 65519, 00:15:33.139 "namespaces": [ 00:15:33.139 { 00:15:33.139 "nsid": 1, 00:15:33.139 "bdev_name": "Malloc1", 00:15:33.139 "name": "Malloc1", 00:15:33.139 "nguid": 
"DA1275700E7D41B5B6B18F60F500D028", 00:15:33.139 "uuid": "da127570-0e7d-41b5-b6b1-8f60f500d028" 00:15:33.139 }, 00:15:33.139 { 00:15:33.139 "nsid": 2, 00:15:33.139 "bdev_name": "Malloc3", 00:15:33.139 "name": "Malloc3", 00:15:33.139 "nguid": "37A7FCAA3A45478FB1607234ED74A469", 00:15:33.139 "uuid": "37a7fcaa-3a45-478f-b160-7234ed74a469" 00:15:33.139 } 00:15:33.139 ] 00:15:33.139 }, 00:15:33.139 { 00:15:33.139 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.139 "subtype": "NVMe", 00:15:33.139 "listen_addresses": [ 00:15:33.139 { 00:15:33.139 "trtype": "VFIOUSER", 00:15:33.139 "adrfam": "IPv4", 00:15:33.139 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.139 "trsvcid": "0" 00:15:33.139 } 00:15:33.139 ], 00:15:33.139 "allow_any_host": true, 00:15:33.139 "hosts": [], 00:15:33.139 "serial_number": "SPDK2", 00:15:33.139 "model_number": "SPDK bdev Controller", 00:15:33.139 "max_namespaces": 32, 00:15:33.139 "min_cntlid": 1, 00:15:33.139 "max_cntlid": 65519, 00:15:33.139 "namespaces": [ 00:15:33.139 { 00:15:33.139 "nsid": 1, 00:15:33.139 "bdev_name": "Malloc2", 00:15:33.139 "name": "Malloc2", 00:15:33.139 "nguid": "D47911E83BF2495588C86F427DB7FE51", 00:15:33.139 "uuid": "d47911e8-3bf2-4955-88c8-6f427db7fe51" 00:15:33.139 } 00:15:33.139 ] 00:15:33.139 } 00:15:33.139 ] 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2109252 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.139 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:33.429 [2024-12-06 13:23:19.872879] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.429 Malloc4 00:15:33.429 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:33.429 [2024-12-06 13:23:20.070315] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.691 Asynchronous Event Request test 00:15:33.691 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.691 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.691 Registering asynchronous event callbacks... 00:15:33.691 Starting namespace attribute notice tests for all controllers... 
00:15:33.691 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.691 aer_cb - Changed Namespace 00:15:33.691 Cleaning up... 00:15:33.691 [ 00:15:33.691 { 00:15:33.691 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.691 "subtype": "Discovery", 00:15:33.691 "listen_addresses": [], 00:15:33.691 "allow_any_host": true, 00:15:33.691 "hosts": [] 00:15:33.691 }, 00:15:33.691 { 00:15:33.691 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.691 "subtype": "NVMe", 00:15:33.691 "listen_addresses": [ 00:15:33.691 { 00:15:33.691 "trtype": "VFIOUSER", 00:15:33.691 "adrfam": "IPv4", 00:15:33.691 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.691 "trsvcid": "0" 00:15:33.691 } 00:15:33.691 ], 00:15:33.691 "allow_any_host": true, 00:15:33.691 "hosts": [], 00:15:33.691 "serial_number": "SPDK1", 00:15:33.691 "model_number": "SPDK bdev Controller", 00:15:33.691 "max_namespaces": 32, 00:15:33.691 "min_cntlid": 1, 00:15:33.691 "max_cntlid": 65519, 00:15:33.691 "namespaces": [ 00:15:33.691 { 00:15:33.691 "nsid": 1, 00:15:33.691 "bdev_name": "Malloc1", 00:15:33.691 "name": "Malloc1", 00:15:33.691 "nguid": "DA1275700E7D41B5B6B18F60F500D028", 00:15:33.691 "uuid": "da127570-0e7d-41b5-b6b1-8f60f500d028" 00:15:33.691 }, 00:15:33.691 { 00:15:33.691 "nsid": 2, 00:15:33.691 "bdev_name": "Malloc3", 00:15:33.691 "name": "Malloc3", 00:15:33.691 "nguid": "37A7FCAA3A45478FB1607234ED74A469", 00:15:33.691 "uuid": "37a7fcaa-3a45-478f-b160-7234ed74a469" 00:15:33.691 } 00:15:33.691 ] 00:15:33.691 }, 00:15:33.691 { 00:15:33.691 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.691 "subtype": "NVMe", 00:15:33.691 "listen_addresses": [ 00:15:33.691 { 00:15:33.691 "trtype": "VFIOUSER", 00:15:33.691 "adrfam": "IPv4", 00:15:33.691 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.691 "trsvcid": "0" 00:15:33.691 } 00:15:33.691 ], 00:15:33.691 "allow_any_host": true, 00:15:33.691 "hosts": [], 00:15:33.691 "serial_number": 
"SPDK2", 00:15:33.691 "model_number": "SPDK bdev Controller", 00:15:33.691 "max_namespaces": 32, 00:15:33.691 "min_cntlid": 1, 00:15:33.691 "max_cntlid": 65519, 00:15:33.691 "namespaces": [ 00:15:33.691 { 00:15:33.691 "nsid": 1, 00:15:33.691 "bdev_name": "Malloc2", 00:15:33.691 "name": "Malloc2", 00:15:33.691 "nguid": "D47911E83BF2495588C86F427DB7FE51", 00:15:33.691 "uuid": "d47911e8-3bf2-4955-88c8-6f427db7fe51" 00:15:33.691 }, 00:15:33.691 { 00:15:33.691 "nsid": 2, 00:15:33.691 "bdev_name": "Malloc4", 00:15:33.691 "name": "Malloc4", 00:15:33.691 "nguid": "63C5B3B1496543998871B38E7859198E", 00:15:33.691 "uuid": "63c5b3b1-4965-4399-8871-b38e7859198e" 00:15:33.691 } 00:15:33.691 ] 00:15:33.691 } 00:15:33.691 ] 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2109252 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2100326 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2100326 ']' 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2100326 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2100326 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2100326' 00:15:33.691 killing process with pid 2100326 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2100326 00:15:33.691 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2100326 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2109488 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2109488' 00:15:33.952 Process pid: 2109488 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2109488 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2109488 ']' 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.952 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:33.952 [2024-12-06 13:23:20.540093] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:33.952 [2024-12-06 13:23:20.541027] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:15:33.952 [2024-12-06 13:23:20.541075] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.213 [2024-12-06 13:23:20.625306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.213 [2024-12-06 13:23:20.654684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.213 [2024-12-06 13:23:20.654716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.213 [2024-12-06 13:23:20.654722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.213 [2024-12-06 13:23:20.654727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:34.213 [2024-12-06 13:23:20.654731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.213 [2024-12-06 13:23:20.655995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.213 [2024-12-06 13:23:20.656131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.213 [2024-12-06 13:23:20.656252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.213 [2024-12-06 13:23:20.656254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.213 [2024-12-06 13:23:20.707870] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:34.213 [2024-12-06 13:23:20.708782] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:34.213 [2024-12-06 13:23:20.709526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:34.213 [2024-12-06 13:23:20.710157] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:34.213 [2024-12-06 13:23:20.710187] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:34.785 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.785 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:34.785 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:35.728 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:35.990 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:35.990 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:35.990 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.990 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:35.990 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.250 Malloc1 00:15:36.250 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:36.511 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:36.511 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:36.771 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.771 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:36.771 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.033 Malloc2 00:15:37.033 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:37.293 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:37.293 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2109488 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2109488 ']' 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2109488 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.553 13:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2109488 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2109488' 00:15:37.553 killing process with pid 2109488 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2109488 00:15:37.553 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2109488 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.813 00:15:37.813 real 0m51.036s 00:15:37.813 user 3m15.728s 00:15:37.813 sys 0m2.628s 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.813 ************************************ 00:15:37.813 END TEST nvmf_vfio_user 00:15:37.813 ************************************ 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.813 ************************************ 00:15:37.813 START TEST nvmf_vfio_user_nvme_compliance 00:15:37.813 ************************************ 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:37.813 * Looking for test storage... 00:15:37.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:37.813 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.074 13:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.074 13:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.074 --rc genhtml_branch_coverage=1 00:15:38.074 --rc genhtml_function_coverage=1 00:15:38.074 --rc genhtml_legend=1 00:15:38.074 --rc geninfo_all_blocks=1 00:15:38.074 --rc geninfo_unexecuted_blocks=1 00:15:38.074 00:15:38.074 ' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.074 --rc genhtml_branch_coverage=1 00:15:38.074 --rc genhtml_function_coverage=1 00:15:38.074 --rc genhtml_legend=1 00:15:38.074 --rc geninfo_all_blocks=1 00:15:38.074 --rc geninfo_unexecuted_blocks=1 00:15:38.074 00:15:38.074 ' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.074 --rc genhtml_branch_coverage=1 00:15:38.074 --rc genhtml_function_coverage=1 00:15:38.074 --rc 
genhtml_legend=1 00:15:38.074 --rc geninfo_all_blocks=1 00:15:38.074 --rc geninfo_unexecuted_blocks=1 00:15:38.074 00:15:38.074 ' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:38.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.074 --rc genhtml_branch_coverage=1 00:15:38.074 --rc genhtml_function_coverage=1 00:15:38.074 --rc genhtml_legend=1 00:15:38.074 --rc geninfo_all_blocks=1 00:15:38.074 --rc geninfo_unexecuted_blocks=1 00:15:38.074 00:15:38.074 ' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.074 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.075 13:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.075 13:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2110247 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2110247' 00:15:38.075 Process pid: 2110247 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2110247 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2110247 ']' 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.075 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.075 [2024-12-06 13:23:24.658324] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:15:38.075 [2024-12-06 13:23:24.658383] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.335 [2024-12-06 13:23:24.743990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.335 [2024-12-06 13:23:24.775518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.335 [2024-12-06 13:23:24.775553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.335 [2024-12-06 13:23:24.775558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.335 [2024-12-06 13:23:24.775563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.335 [2024-12-06 13:23:24.775567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.335 [2024-12-06 13:23:24.776780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.335 [2024-12-06 13:23:24.776930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.335 [2024-12-06 13:23:24.776931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.905 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.905 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:38.905 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.845 13:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.845 malloc0 00:15:39.845 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:40.106 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:40.106 00:15:40.106 00:15:40.106 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.106 http://cunit.sourceforge.net/ 00:15:40.106 00:15:40.106 00:15:40.106 Suite: nvme_compliance 00:15:40.106 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 13:23:26.697834] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.106 [2024-12-06 13:23:26.699139] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:40.106 [2024-12-06 13:23:26.699151] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:40.106 [2024-12-06 13:23:26.699156] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:40.106 [2024-12-06 13:23:26.700853] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.106 passed 00:15:40.368 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 13:23:26.778357] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.368 [2024-12-06 13:23:26.781370] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.368 passed 00:15:40.368 Test: admin_identify_ns ...[2024-12-06 13:23:26.858818] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.368 [2024-12-06 13:23:26.918465] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:40.368 [2024-12-06 13:23:26.926469] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:40.368 [2024-12-06 13:23:26.947549] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:40.368 passed 00:15:40.368 Test: admin_get_features_mandatory_features ...[2024-12-06 13:23:27.023606] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.629 [2024-12-06 13:23:27.026627] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.629 passed 00:15:40.629 Test: admin_get_features_optional_features ...[2024-12-06 13:23:27.105104] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.630 [2024-12-06 13:23:27.108123] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.630 passed 00:15:40.630 Test: admin_set_features_number_of_queues ...[2024-12-06 13:23:27.182879] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.890 [2024-12-06 13:23:27.287542] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.890 passed 00:15:40.890 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 13:23:27.360748] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.890 [2024-12-06 13:23:27.363762] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.890 passed 00:15:40.890 Test: admin_get_log_page_with_lpo ...[2024-12-06 13:23:27.440520] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.890 [2024-12-06 13:23:27.509464] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:40.890 [2024-12-06 13:23:27.522512] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.150 passed 00:15:41.150 Test: fabric_property_get ...[2024-12-06 13:23:27.595723] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.150 [2024-12-06 13:23:27.596923] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:41.150 [2024-12-06 13:23:27.598745] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.150 passed 00:15:41.150 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 13:23:27.675222] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.150 [2024-12-06 13:23:27.676425] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:41.150 [2024-12-06 13:23:27.678245] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.150 passed 00:15:41.150 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 13:23:27.753806] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.411 [2024-12-06 13:23:27.841460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.411 [2024-12-06 13:23:27.857460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.411 [2024-12-06 13:23:27.862534] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.411 passed 00:15:41.411 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 13:23:27.938457] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.411 [2024-12-06 13:23:27.939662] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:41.411 [2024-12-06 13:23:27.941469] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.411 passed 00:15:41.411 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 13:23:28.016186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.673 [2024-12-06 13:23:28.091230] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:41.673 [2024-12-06 
13:23:28.114462] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.673 [2024-12-06 13:23:28.119524] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.673 passed 00:15:41.673 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 13:23:28.196414] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.673 [2024-12-06 13:23:28.197623] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:41.673 [2024-12-06 13:23:28.197640] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:41.673 [2024-12-06 13:23:28.199435] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.673 passed 00:15:41.673 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 13:23:28.273162] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.934 [2024-12-06 13:23:28.365460] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:41.934 [2024-12-06 13:23:28.373464] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:41.934 [2024-12-06 13:23:28.381462] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:41.934 [2024-12-06 13:23:28.389460] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:41.934 [2024-12-06 13:23:28.418530] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.934 passed 00:15:41.934 Test: admin_create_io_sq_verify_pc ...[2024-12-06 13:23:28.492014] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.934 [2024-12-06 13:23:28.507468] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:41.934 [2024-12-06 13:23:28.525148] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.934 passed 00:15:42.196 Test: admin_create_io_qp_max_qps ...[2024-12-06 13:23:28.602598] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.137 [2024-12-06 13:23:29.715462] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:43.708 [2024-12-06 13:23:30.101177] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.708 passed 00:15:43.708 Test: admin_create_io_sq_shared_cq ...[2024-12-06 13:23:30.176008] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.708 [2024-12-06 13:23:30.306503] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.708 [2024-12-06 13:23:30.345503] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.968 passed 00:15:43.968 00:15:43.968 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.968 suites 1 1 n/a 0 0 00:15:43.968 tests 18 18 18 0 0 00:15:43.968 asserts 360 360 360 0 n/a 00:15:43.968 00:15:43.968 Elapsed time = 1.500 seconds 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2110247 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2110247 ']' 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2110247 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2110247 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2110247' 00:15:43.968 killing process with pid 2110247 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2110247 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2110247 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:43.968 00:15:43.968 real 0m6.210s 00:15:43.968 user 0m17.610s 00:15:43.968 sys 0m0.538s 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.968 ************************************ 00:15:43.968 END TEST nvmf_vfio_user_nvme_compliance 00:15:43.968 ************************************ 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.968 13:23:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.229 ************************************ 00:15:44.229 START TEST nvmf_vfio_user_fuzz 00:15:44.229 ************************************ 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.229 * Looking for test storage... 00:15:44.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:44.229 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.230 13:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.230 --rc genhtml_branch_coverage=1 00:15:44.230 --rc genhtml_function_coverage=1 00:15:44.230 --rc genhtml_legend=1 00:15:44.230 --rc geninfo_all_blocks=1 00:15:44.230 --rc geninfo_unexecuted_blocks=1 00:15:44.230 00:15:44.230 ' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.230 --rc genhtml_branch_coverage=1 00:15:44.230 --rc genhtml_function_coverage=1 00:15:44.230 --rc genhtml_legend=1 00:15:44.230 --rc geninfo_all_blocks=1 00:15:44.230 --rc geninfo_unexecuted_blocks=1 00:15:44.230 00:15:44.230 ' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:44.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.230 --rc genhtml_branch_coverage=1 00:15:44.230 --rc genhtml_function_coverage=1 00:15:44.230 --rc genhtml_legend=1 00:15:44.230 --rc geninfo_all_blocks=1 00:15:44.230 --rc geninfo_unexecuted_blocks=1 00:15:44.230 00:15:44.230 ' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:44.230 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:44.230 --rc genhtml_branch_coverage=1 00:15:44.230 --rc genhtml_function_coverage=1 00:15:44.230 --rc genhtml_legend=1 00:15:44.230 --rc geninfo_all_blocks=1 00:15:44.230 --rc geninfo_unexecuted_blocks=1 00:15:44.230 00:15:44.230 ' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.230 13:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:44.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2111650 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2111650' 00:15:44.230 Process pid: 2111650 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2111650 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2111650 ']' 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.230 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.231 13:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.491 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.491 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.432 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.432 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:45.432 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.372 malloc0 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.372 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:46.373 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:18.498 Fuzzing completed. Shutting down the fuzz application 00:16:18.498 00:16:18.498 Dumping successful admin opcodes: 00:16:18.498 9, 10, 00:16:18.498 Dumping successful io opcodes: 00:16:18.498 0, 00:16:18.498 NS: 0x20000081ef00 I/O qp, Total commands completed: 1294909, total successful commands: 5077, random_seed: 3338297216 00:16:18.498 NS: 0x20000081ef00 admin qp, Total commands completed: 292176, total successful commands: 69, random_seed: 907275520 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2111650 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2111650 ']' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2111650 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2111650 00:16:18.498 13:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2111650' 00:16:18.498 killing process with pid 2111650 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2111650 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2111650 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:18.498 00:16:18.498 real 0m32.797s 00:16:18.498 user 0m34.477s 00:16:18.498 sys 0m26.849s 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 ************************************ 00:16:18.498 END TEST nvmf_vfio_user_fuzz 00:16:18.498 ************************************ 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.498 ************************************ 00:16:18.498 START TEST nvmf_auth_target 00:16:18.498 ************************************ 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.498 * Looking for test storage... 00:16:18.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.498 13:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.498 13:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.498 --rc genhtml_branch_coverage=1 00:16:18.498 --rc genhtml_function_coverage=1 00:16:18.498 --rc genhtml_legend=1 00:16:18.498 --rc geninfo_all_blocks=1 00:16:18.498 --rc geninfo_unexecuted_blocks=1 00:16:18.498 00:16:18.498 ' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.498 --rc genhtml_branch_coverage=1 00:16:18.498 --rc genhtml_function_coverage=1 00:16:18.498 --rc genhtml_legend=1 00:16:18.498 --rc geninfo_all_blocks=1 00:16:18.498 --rc geninfo_unexecuted_blocks=1 00:16:18.498 00:16:18.498 ' 00:16:18.498 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:18.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.498 --rc genhtml_branch_coverage=1 00:16:18.498 --rc genhtml_function_coverage=1 00:16:18.498 --rc genhtml_legend=1 00:16:18.498 --rc geninfo_all_blocks=1 00:16:18.499 --rc geninfo_unexecuted_blocks=1 00:16:18.499 00:16:18.499 ' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:18.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.499 --rc genhtml_branch_coverage=1 00:16:18.499 --rc genhtml_function_coverage=1 00:16:18.499 --rc genhtml_legend=1 00:16:18.499 
--rc geninfo_all_blocks=1 00:16:18.499 --rc geninfo_unexecuted_blocks=1 00:16:18.499 00:16:18.499 ' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.499 
13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:18.499 13:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:18.499 13:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:18.499 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:25.081 13:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:25.081 13:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:25.081 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:25.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:25.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.082 
13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:25.082 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.082 
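The discovery loop traced above (common.sh@410–429) maps each PCI function to its kernel net device by globbing sysfs and stripping the directory prefix before printing the `Found net devices under …` line. A minimal bash sketch of that lookup; the `SYSFS_ROOT` override and the `pci_to_netdevs` name are added here for illustration and are not part of the original script.

```shell
#!/usr/bin/env bash
# Sketch of the pci -> netdev lookup performed by the traced loop.
# SYSFS_ROOT is an illustration-only parameter so the sketch can be
# exercised against a fake sysfs tree; the real script uses /sys.
pci_to_netdevs() {
    local root=${SYSFS_ROOT:-/sys} pci=$1
    local devs=("$root/bus/pci/devices/$pci/net/"*)   # one entry per netdev (common.sh@411)
    [ -e "${devs[0]}" ] || { echo "no net devices under $pci"; return 1; }
    devs=("${devs[@]##*/}")                           # strip path prefix (common.sh@427)
    echo "Found net devices under $pci: ${devs[*]}"   # matches the trace output (common.sh@428)
}
```

Against the hardware in this run the lookup yields the trace's own output, e.g. `Found net devices under 0000:4b:00.0: cvl_0_0`.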
13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:25.082 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:25.082 13:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.082 13:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:25.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:16:25.082 00:16:25.082 --- 10.0.0.2 ping statistics --- 00:16:25.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.082 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:16:25.082 00:16:25.082 --- 10.0.0.1 ping statistics --- 00:16:25.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.082 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2122187 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2122187 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2122187 ']' 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.082 13:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2122431 00:16:25.654 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5a99da208cd0c978737ef4c0b289fd20547622a6cacf4f6c 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vaQ 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5a99da208cd0c978737ef4c0b289fd20547622a6cacf4f6c 0 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5a99da208cd0c978737ef4c0b289fd20547622a6cacf4f6c 0 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5a99da208cd0c978737ef4c0b289fd20547622a6cacf4f6c 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:25.655 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vaQ 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vaQ 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vaQ 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e19a35652482be9ed9ed905581302c9e77bfab7f65568b4460be9e439f29ef7 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4ss 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e19a35652482be9ed9ed905581302c9e77bfab7f65568b4460be9e439f29ef7 3 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e19a35652482be9ed9ed905581302c9e77bfab7f65568b4460be9e439f29ef7 3 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e19a35652482be9ed9ed905581302c9e77bfab7f65568b4460be9e439f29ef7 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4ss 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4ss 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.4ss 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fdc4dfaa26f60f9b613ac0512074a1f2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UPK 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fdc4dfaa26f60f9b613ac0512074a1f2 1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
fdc4dfaa26f60f9b613ac0512074a1f2 1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fdc4dfaa26f60f9b613ac0512074a1f2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UPK 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UPK 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.UPK 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d11337ea8c4bf8c90e3c3de68682656a0e0d81befd824829 00:16:25.917 13:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.edy 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d11337ea8c4bf8c90e3c3de68682656a0e0d81befd824829 2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d11337ea8c4bf8c90e3c3de68682656a0e0d81befd824829 2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d11337ea8c4bf8c90e3c3de68682656a0e0d81befd824829 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.edy 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.edy 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.edy 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ecf939868fc9e1e78eb82aa71629d917496a7655a49a51a 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zlI 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4ecf939868fc9e1e78eb82aa71629d917496a7655a49a51a 2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ecf939868fc9e1e78eb82aa71629d917496a7655a49a51a 2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ecf939868fc9e1e78eb82aa71629d917496a7655a49a51a 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:25.917 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.179 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zlI 00:16:26.179 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zlI 00:16:26.179 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.zlI 00:16:26.179 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:26.179 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a81e5f9d469c6edef2a523bbf0041ae8 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.m9i 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a81e5f9d469c6edef2a523bbf0041ae8 1 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a81e5f9d469c6edef2a523bbf0041ae8 1 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a81e5f9d469c6edef2a523bbf0041ae8 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.m9i 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.m9i 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.m9i 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6058f8b2f27c9ccfa87b95d08fbb0fbaea7121e4c78f28a1eb835a433606f7d0 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8H0 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6058f8b2f27c9ccfa87b95d08fbb0fbaea7121e4c78f28a1eb835a433606f7d0 3 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6058f8b2f27c9ccfa87b95d08fbb0fbaea7121e4c78f28a1eb835a433606f7d0 3 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6058f8b2f27c9ccfa87b95d08fbb0fbaea7121e4c78f28a1eb835a433606f7d0 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8H0 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8H0 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8H0 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2122187 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2122187 ']' 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.180 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2122431 /var/tmp/host.sock 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2122431 ']' 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:26.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.442 13:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vaQ 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vaQ 00:16:26.703 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vaQ 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.4ss ]] 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ss 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ss 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ss 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UPK 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UPK 00:16:26.964 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UPK 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.edy ]] 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edy 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edy 00:16:27.227 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edy 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zlI 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.zlI 00:16:27.488 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.zlI 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.m9i ]] 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.m9i 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.m9i 00:16:27.488 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.m9i 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8H0 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8H0 00:16:27.748 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8H0 00:16:28.008 13:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:28.008 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:28.008 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.008 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.008 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.008 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.269 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:28.269 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.269 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.270 13:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.270 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.530 00:16:28.530 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.530 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.530 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.530 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.530 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.530 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.530 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.790 { 00:16:28.790 "cntlid": 1, 00:16:28.790 "qid": 0, 00:16:28.790 "state": "enabled", 00:16:28.790 "thread": "nvmf_tgt_poll_group_000", 00:16:28.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.790 "listen_address": { 00:16:28.790 "trtype": "TCP", 00:16:28.790 "adrfam": "IPv4", 00:16:28.790 "traddr": "10.0.0.2", 00:16:28.790 "trsvcid": "4420" 00:16:28.790 }, 00:16:28.790 "peer_address": { 00:16:28.790 "trtype": "TCP", 00:16:28.790 "adrfam": "IPv4", 00:16:28.790 "traddr": "10.0.0.1", 00:16:28.790 "trsvcid": "45428" 00:16:28.790 }, 00:16:28.790 "auth": { 00:16:28.790 "state": "completed", 00:16:28.790 "digest": "sha256", 00:16:28.790 "dhgroup": "null" 00:16:28.790 } 00:16:28.790 } 00:16:28.790 ]' 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.790 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.048 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:29.048 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:29.616 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.875 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.136 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.136 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.137 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.137 { 00:16:30.137 "cntlid": 3, 00:16:30.137 "qid": 0, 00:16:30.137 "state": "enabled", 00:16:30.137 "thread": "nvmf_tgt_poll_group_000", 00:16:30.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.137 "listen_address": { 00:16:30.137 "trtype": "TCP", 00:16:30.137 "adrfam": "IPv4", 00:16:30.137 
"traddr": "10.0.0.2", 00:16:30.137 "trsvcid": "4420" 00:16:30.137 }, 00:16:30.137 "peer_address": { 00:16:30.137 "trtype": "TCP", 00:16:30.137 "adrfam": "IPv4", 00:16:30.137 "traddr": "10.0.0.1", 00:16:30.137 "trsvcid": "45450" 00:16:30.137 }, 00:16:30.137 "auth": { 00:16:30.137 "state": "completed", 00:16:30.137 "digest": "sha256", 00:16:30.137 "dhgroup": "null" 00:16:30.137 } 00:16:30.137 } 00:16:30.137 ]' 00:16:30.137 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.398 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.659 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:30.659 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.229 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.490 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.490 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.490 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.490 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.490 00:16:31.490 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.490 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.490 
13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.750 { 00:16:31.750 "cntlid": 5, 00:16:31.750 "qid": 0, 00:16:31.750 "state": "enabled", 00:16:31.750 "thread": "nvmf_tgt_poll_group_000", 00:16:31.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.750 "listen_address": { 00:16:31.750 "trtype": "TCP", 00:16:31.750 "adrfam": "IPv4", 00:16:31.750 "traddr": "10.0.0.2", 00:16:31.750 "trsvcid": "4420" 00:16:31.750 }, 00:16:31.750 "peer_address": { 00:16:31.750 "trtype": "TCP", 00:16:31.750 "adrfam": "IPv4", 00:16:31.750 "traddr": "10.0.0.1", 00:16:31.750 "trsvcid": "38396" 00:16:31.750 }, 00:16:31.750 "auth": { 00:16:31.750 "state": "completed", 00:16:31.750 "digest": "sha256", 00:16:31.750 "dhgroup": "null" 00:16:31.750 } 00:16:31.750 } 00:16:31.750 ]' 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.750 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:32.010 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.951 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.211 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.211 
13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.211 { 00:16:33.211 "cntlid": 7, 00:16:33.211 "qid": 0, 00:16:33.211 "state": "enabled", 00:16:33.211 "thread": "nvmf_tgt_poll_group_000", 00:16:33.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:33.211 "listen_address": { 00:16:33.211 "trtype": "TCP", 00:16:33.211 "adrfam": "IPv4", 00:16:33.211 "traddr": "10.0.0.2", 00:16:33.211 "trsvcid": "4420" 00:16:33.211 }, 00:16:33.211 "peer_address": { 00:16:33.211 "trtype": "TCP", 00:16:33.211 "adrfam": "IPv4", 00:16:33.211 "traddr": "10.0.0.1", 00:16:33.211 "trsvcid": "38430" 00:16:33.211 }, 00:16:33.211 "auth": { 00:16:33.211 "state": "completed", 00:16:33.211 "digest": "sha256", 00:16:33.211 "dhgroup": "null" 00:16:33.211 } 00:16:33.211 } 00:16:33.211 ]' 00:16:33.211 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.472 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.732 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:33.732 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.303 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.563 00:16:34.563 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.563 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.563 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.824 { 00:16:34.824 "cntlid": 9, 00:16:34.824 "qid": 0, 00:16:34.824 "state": "enabled", 00:16:34.824 "thread": "nvmf_tgt_poll_group_000", 00:16:34.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.824 "listen_address": { 00:16:34.824 "trtype": "TCP", 00:16:34.824 "adrfam": "IPv4", 00:16:34.824 "traddr": "10.0.0.2", 00:16:34.824 "trsvcid": "4420" 00:16:34.824 }, 00:16:34.824 "peer_address": { 00:16:34.824 "trtype": "TCP", 00:16:34.824 "adrfam": "IPv4", 00:16:34.824 "traddr": "10.0.0.1", 00:16:34.824 "trsvcid": "38444" 00:16:34.824 
}, 00:16:34.824 "auth": { 00:16:34.824 "state": "completed", 00:16:34.824 "digest": "sha256", 00:16:34.824 "dhgroup": "ffdhe2048" 00:16:34.824 } 00:16:34.824 } 00:16:34.824 ]' 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.824 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.084 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.084 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.084 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.084 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:35.084 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret 
DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.671 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.931 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.192 00:16:36.192 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.192 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.192 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.452 { 00:16:36.452 "cntlid": 11, 00:16:36.452 "qid": 0, 00:16:36.452 "state": "enabled", 00:16:36.452 "thread": "nvmf_tgt_poll_group_000", 00:16:36.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.452 "listen_address": { 00:16:36.452 "trtype": "TCP", 00:16:36.452 "adrfam": "IPv4", 00:16:36.452 "traddr": "10.0.0.2", 00:16:36.452 "trsvcid": "4420" 00:16:36.452 }, 00:16:36.452 "peer_address": { 00:16:36.452 "trtype": "TCP", 00:16:36.452 "adrfam": "IPv4", 00:16:36.452 "traddr": "10.0.0.1", 00:16:36.452 "trsvcid": "38480" 00:16:36.452 }, 00:16:36.452 "auth": { 00:16:36.452 "state": "completed", 00:16:36.452 "digest": "sha256", 00:16:36.452 "dhgroup": "ffdhe2048" 00:16:36.452 } 00:16:36.452 } 00:16:36.452 ]' 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.452 13:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.452 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.452 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.452 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.453 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.713 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:36.713 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.283 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.544 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:37.544 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.544 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.544 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.544 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.804 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.805 13:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.805 { 00:16:37.805 "cntlid": 13, 00:16:37.805 "qid": 0, 00:16:37.805 "state": "enabled", 00:16:37.805 "thread": "nvmf_tgt_poll_group_000", 00:16:37.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.805 "listen_address": { 00:16:37.805 "trtype": "TCP", 00:16:37.805 "adrfam": "IPv4", 00:16:37.805 "traddr": "10.0.0.2", 00:16:37.805 "trsvcid": "4420" 00:16:37.805 }, 00:16:37.805 "peer_address": { 00:16:37.805 "trtype": "TCP", 00:16:37.805 "adrfam": "IPv4", 00:16:37.805 "traddr": "10.0.0.1", 00:16:37.805 "trsvcid": "38498" 00:16:37.805 }, 00:16:37.805 "auth": { 00:16:37.805 "state": "completed", 00:16:37.805 "digest": "sha256", 00:16:37.805 "dhgroup": "ffdhe2048" 00:16:37.805 } 00:16:37.805 } 00:16:37.805 ]' 00:16:37.805 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.065 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.326 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:38.326 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.894 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.895 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.154 00:16:39.154 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.154 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.154 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.414 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.414 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.414 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.414 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.414 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.414 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.414 { 00:16:39.414 "cntlid": 15, 00:16:39.414 "qid": 0, 00:16:39.414 "state": "enabled", 00:16:39.415 "thread": "nvmf_tgt_poll_group_000", 00:16:39.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.415 "listen_address": { 00:16:39.415 "trtype": "TCP", 00:16:39.415 "adrfam": "IPv4", 00:16:39.415 "traddr": "10.0.0.2", 00:16:39.415 "trsvcid": "4420" 00:16:39.415 }, 00:16:39.415 "peer_address": { 00:16:39.415 "trtype": "TCP", 00:16:39.415 "adrfam": "IPv4", 00:16:39.415 "traddr": "10.0.0.1", 
00:16:39.415 "trsvcid": "38538" 00:16:39.415 }, 00:16:39.415 "auth": { 00:16:39.415 "state": "completed", 00:16:39.415 "digest": "sha256", 00:16:39.415 "dhgroup": "ffdhe2048" 00:16:39.415 } 00:16:39.415 } 00:16:39.415 ]' 00:16:39.415 13:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.415 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.415 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.415 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.415 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.675 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.675 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.675 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.675 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:39.675 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.244 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.504 13:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.504 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.764 00:16:40.764 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.764 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.764 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.024 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.024 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.024 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.024 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.024 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.025 { 00:16:41.025 "cntlid": 17, 00:16:41.025 "qid": 0, 00:16:41.025 "state": "enabled", 00:16:41.025 "thread": "nvmf_tgt_poll_group_000", 00:16:41.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.025 "listen_address": { 00:16:41.025 "trtype": "TCP", 00:16:41.025 "adrfam": "IPv4", 00:16:41.025 "traddr": "10.0.0.2", 00:16:41.025 "trsvcid": "4420" 00:16:41.025 }, 00:16:41.025 "peer_address": { 00:16:41.025 "trtype": "TCP", 00:16:41.025 "adrfam": "IPv4", 00:16:41.025 "traddr": "10.0.0.1", 00:16:41.025 "trsvcid": "60844" 00:16:41.025 }, 00:16:41.025 "auth": { 00:16:41.025 "state": "completed", 00:16:41.025 "digest": "sha256", 00:16:41.025 "dhgroup": "ffdhe3072" 00:16:41.025 } 00:16:41.025 } 00:16:41.025 ]' 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.025 13:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.025 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.290 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:41.290 13:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.983 13:24:28 
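The `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` checks above are how `connect_authenticate` verifies that the qpair actually negotiated the expected DH-HMAC-CHAP parameters. A minimal standalone sketch of those checks, run against a trimmed copy of the `nvmf_subsystem_get_qpairs` JSON shown in the log (the JSON literal here is an abbreviated stand-in, not live RPC output):

```shell
# Trimmed qpairs JSON modeled on the nvmf_subsystem_get_qpairs output above.
qpairs='[{"cntlid": 17, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe3072"}}]'

# Same three assertions target/auth.sh makes at @75-@77: digest, dhgroup,
# and auth state must match what bdev_nvme_set_options configured.
digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state'   <<< "$qpairs")
[[ $digest == sha256 && $dhgroup == ffdhe3072 && $state == completed ]] && echo "auth verified"
```

If any of the three fields disagrees with the configured digest/dhgroup, the `[[ ... ]]` pattern match in the real script fails and the test iteration aborts before detaching the controller.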
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.983 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.983 13:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.315 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.315 13:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.591 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.591 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.591 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.592 { 00:16:42.592 "cntlid": 19, 00:16:42.592 "qid": 0, 00:16:42.592 "state": "enabled", 00:16:42.592 "thread": "nvmf_tgt_poll_group_000", 00:16:42.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.592 "listen_address": { 00:16:42.592 "trtype": "TCP", 00:16:42.592 "adrfam": "IPv4", 00:16:42.592 "traddr": "10.0.0.2", 00:16:42.592 "trsvcid": "4420" 00:16:42.592 }, 00:16:42.592 "peer_address": { 00:16:42.592 "trtype": "TCP", 00:16:42.592 "adrfam": "IPv4", 00:16:42.592 "traddr": "10.0.0.1", 00:16:42.592 "trsvcid": "60874" 00:16:42.592 }, 00:16:42.592 "auth": { 00:16:42.592 "state": "completed", 00:16:42.592 "digest": "sha256", 00:16:42.592 "dhgroup": "ffdhe3072" 00:16:42.592 } 00:16:42.592 } 00:16:42.592 ]' 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.592 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.856 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:42.856 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:43.424 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.424 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.424 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.424 13:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.424 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.424 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.424 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.424 13:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.683 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.942 00:16:43.942 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.942 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.942 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.202 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.202 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.202 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.203 { 00:16:44.203 "cntlid": 21, 00:16:44.203 "qid": 0, 00:16:44.203 "state": "enabled", 00:16:44.203 "thread": "nvmf_tgt_poll_group_000", 00:16:44.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.203 "listen_address": { 00:16:44.203 "trtype": "TCP", 00:16:44.203 "adrfam": "IPv4", 00:16:44.203 "traddr": "10.0.0.2", 00:16:44.203 
"trsvcid": "4420" 00:16:44.203 }, 00:16:44.203 "peer_address": { 00:16:44.203 "trtype": "TCP", 00:16:44.203 "adrfam": "IPv4", 00:16:44.203 "traddr": "10.0.0.1", 00:16:44.203 "trsvcid": "60896" 00:16:44.203 }, 00:16:44.203 "auth": { 00:16:44.203 "state": "completed", 00:16:44.203 "digest": "sha256", 00:16:44.203 "dhgroup": "ffdhe3072" 00:16:44.203 } 00:16:44.203 } 00:16:44.203 ]' 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.203 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.463 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:44.463 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.031 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.290 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.550 00:16:45.550 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.550 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.551 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
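Note that the key3 iteration above calls `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` with `--dhchap-key key3` but no `--dhchap-ctrlr-key`, unlike the key0-key2 iterations. That is the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line at work: bash's `:+` alternate-value expansion emits the flag pair only when a controller key is defined for that index. A self-contained illustration (the array contents are made-up placeholders, not the real keys):

```shell
# Placeholder controller keys; index 3 is empty, mirroring the key3 case above.
ckeys=([0]=abc [1]=def [2]=ghi [3]=)

for i in 0 3; do
  # ${var:+word} expands to word only if var is set and non-empty, so the
  # array is either (--dhchap-ctrlr-key ckeyN) or empty.
  args=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
  echo "key$i -> ${args[*]:-<no ctrlr key>}"   # key0 gets the flag, key3 does not
done
```

This is why key3 exercises unidirectional authentication (host proves itself to the target) while the other keys also authenticate the controller back to the host.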
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.812 { 00:16:45.812 "cntlid": 23, 00:16:45.812 "qid": 0, 00:16:45.812 "state": "enabled", 00:16:45.812 "thread": "nvmf_tgt_poll_group_000", 00:16:45.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.812 "listen_address": { 00:16:45.812 "trtype": "TCP", 00:16:45.812 "adrfam": "IPv4", 00:16:45.812 "traddr": "10.0.0.2", 00:16:45.812 "trsvcid": "4420" 00:16:45.812 }, 00:16:45.812 "peer_address": { 00:16:45.812 "trtype": "TCP", 00:16:45.812 "adrfam": "IPv4", 00:16:45.812 "traddr": "10.0.0.1", 00:16:45.812 "trsvcid": "60922" 00:16:45.812 }, 00:16:45.812 "auth": { 00:16:45.812 "state": "completed", 00:16:45.812 "digest": "sha256", 00:16:45.812 "dhgroup": "ffdhe3072" 00:16:45.812 } 00:16:45.812 } 00:16:45.812 ]' 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.812 13:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.812 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.073 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:46.073 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.643 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
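At this point the outer `for dhgroup` loop (auth.sh @119) has advanced from ffdhe3072 to ffdhe4096 and the inner `for keyid` loop (@120) restarts at key0. A condensed, stubbed sketch of that driver structure as it appears in the log; `hostrpc` is replaced by a function that prints the rpc.py invocation instead of talking to `/var/tmp/host.sock`, and the array contents reflect only what this excerpt shows:

```shell
# Stub: print the command that the real hostrpc helper would run.
hostrpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

dhgroups=(ffdhe3072 ffdhe4096)   # the two groups visible in this excerpt
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # @121: restrict the host to one digest/dhgroup combination per pass
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # @71/@60: attach with the key under test (ctrlr key omitted for key3 in the real run)
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$keyid"
    # @78: tear down before the next key/dhgroup combination
    hostrpc bdev_nvme_detach_controller nvme0
  done
done
```

Each inner iteration corresponds to one `connect_authenticate <digest> <dhgroup> <keyid>` cycle in the log, bracketed by the `nvmf_subsystem_add_host`/`nvmf_subsystem_remove_host` calls on the target side.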
+x 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.904 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.165 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.165 13:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.165 { 00:16:47.165 "cntlid": 25, 00:16:47.165 "qid": 0, 00:16:47.165 "state": "enabled", 00:16:47.165 "thread": "nvmf_tgt_poll_group_000", 00:16:47.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.165 "listen_address": { 00:16:47.165 "trtype": "TCP", 00:16:47.165 "adrfam": "IPv4", 00:16:47.165 "traddr": "10.0.0.2", 00:16:47.165 "trsvcid": "4420" 00:16:47.165 }, 00:16:47.165 "peer_address": { 00:16:47.165 "trtype": "TCP", 00:16:47.165 "adrfam": "IPv4", 00:16:47.165 "traddr": "10.0.0.1", 00:16:47.165 "trsvcid": "60948" 00:16:47.165 }, 00:16:47.165 "auth": { 00:16:47.165 "state": "completed", 00:16:47.165 "digest": "sha256", 00:16:47.165 "dhgroup": "ffdhe4096" 00:16:47.165 } 00:16:47.165 } 00:16:47.165 ]' 00:16:47.165 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.426 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.686 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:47.686 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.258 13:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.258 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.519 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.519 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.519 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.519 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.778 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.779 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.779 { 00:16:48.779 "cntlid": 27, 00:16:48.779 "qid": 0, 00:16:48.779 "state": "enabled", 00:16:48.779 "thread": "nvmf_tgt_poll_group_000", 00:16:48.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.779 "listen_address": { 00:16:48.779 "trtype": "TCP", 00:16:48.779 "adrfam": "IPv4", 00:16:48.779 "traddr": "10.0.0.2", 00:16:48.779 
"trsvcid": "4420" 00:16:48.779 }, 00:16:48.779 "peer_address": { 00:16:48.779 "trtype": "TCP", 00:16:48.779 "adrfam": "IPv4", 00:16:48.779 "traddr": "10.0.0.1", 00:16:48.779 "trsvcid": "60976" 00:16:48.779 }, 00:16:48.779 "auth": { 00:16:48.779 "state": "completed", 00:16:48.779 "digest": "sha256", 00:16:48.779 "dhgroup": "ffdhe4096" 00:16:48.779 } 00:16:48.779 } 00:16:48.779 ]' 00:16:48.779 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.779 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.779 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:49.038 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.979 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.980 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.980 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.240 00:16:50.240 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.240 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:50.240 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.500 { 00:16:50.500 "cntlid": 29, 00:16:50.500 "qid": 0, 00:16:50.500 "state": "enabled", 00:16:50.500 "thread": "nvmf_tgt_poll_group_000", 00:16:50.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.500 "listen_address": { 00:16:50.500 "trtype": "TCP", 00:16:50.500 "adrfam": "IPv4", 00:16:50.500 "traddr": "10.0.0.2", 00:16:50.500 "trsvcid": "4420" 00:16:50.500 }, 00:16:50.500 "peer_address": { 00:16:50.500 "trtype": "TCP", 00:16:50.500 "adrfam": "IPv4", 00:16:50.500 "traddr": "10.0.0.1", 00:16:50.500 "trsvcid": "60998" 00:16:50.500 }, 00:16:50.500 "auth": { 00:16:50.500 "state": "completed", 00:16:50.500 "digest": "sha256", 00:16:50.500 "dhgroup": "ffdhe4096" 00:16:50.500 } 00:16:50.500 } 00:16:50.500 ]' 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.500 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.500 13:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.501 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.501 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.501 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.501 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.501 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.760 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:50.760 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.330 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.591 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.852 00:16:51.852 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.852 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.852 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.112 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.113 { 00:16:52.113 "cntlid": 31, 00:16:52.113 "qid": 0, 00:16:52.113 "state": "enabled", 00:16:52.113 "thread": "nvmf_tgt_poll_group_000", 00:16:52.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.113 "listen_address": { 00:16:52.113 "trtype": "TCP", 00:16:52.113 "adrfam": "IPv4", 00:16:52.113 "traddr": "10.0.0.2", 00:16:52.113 "trsvcid": "4420" 00:16:52.113 }, 00:16:52.113 "peer_address": { 00:16:52.113 "trtype": "TCP", 00:16:52.113 "adrfam": "IPv4", 00:16:52.113 "traddr": "10.0.0.1", 00:16:52.113 "trsvcid": "48414" 00:16:52.113 }, 00:16:52.113 "auth": { 00:16:52.113 "state": "completed", 00:16:52.113 "digest": "sha256", 00:16:52.113 "dhgroup": "ffdhe4096" 00:16:52.113 } 00:16:52.113 } 00:16:52.113 ]' 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.113 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.372 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:52.372 13:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:52.941 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.942 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.942 13:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.202 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.463 00:16:53.463 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.463 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.463 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.724 { 00:16:53.724 "cntlid": 33, 00:16:53.724 "qid": 0, 00:16:53.724 "state": "enabled", 00:16:53.724 "thread": "nvmf_tgt_poll_group_000", 00:16:53.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.724 "listen_address": { 00:16:53.724 "trtype": "TCP", 00:16:53.724 "adrfam": "IPv4", 00:16:53.724 "traddr": "10.0.0.2", 00:16:53.724 
"trsvcid": "4420" 00:16:53.724 }, 00:16:53.724 "peer_address": { 00:16:53.724 "trtype": "TCP", 00:16:53.724 "adrfam": "IPv4", 00:16:53.724 "traddr": "10.0.0.1", 00:16:53.724 "trsvcid": "48438" 00:16:53.724 }, 00:16:53.724 "auth": { 00:16:53.724 "state": "completed", 00:16:53.724 "digest": "sha256", 00:16:53.724 "dhgroup": "ffdhe6144" 00:16:53.724 } 00:16:53.724 } 00:16:53.724 ]' 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.724 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.984 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:53.985 13:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.554 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.814 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.814 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.074 00:16:55.074 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.074 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.074 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.335 { 00:16:55.335 "cntlid": 35, 00:16:55.335 "qid": 0, 00:16:55.335 "state": "enabled", 00:16:55.335 "thread": "nvmf_tgt_poll_group_000", 00:16:55.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.335 "listen_address": { 00:16:55.335 "trtype": "TCP", 00:16:55.335 "adrfam": "IPv4", 00:16:55.335 "traddr": "10.0.0.2", 00:16:55.335 "trsvcid": "4420" 00:16:55.335 }, 00:16:55.335 "peer_address": { 00:16:55.335 "trtype": "TCP", 00:16:55.335 "adrfam": "IPv4", 00:16:55.335 "traddr": "10.0.0.1", 00:16:55.335 "trsvcid": "48460" 00:16:55.335 }, 00:16:55.335 "auth": { 00:16:55.335 "state": "completed", 00:16:55.335 "digest": "sha256", 00:16:55.335 "dhgroup": "ffdhe6144" 00:16:55.335 } 00:16:55.335 } 00:16:55.335 ]' 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.335 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.335 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.596 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.596 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.596 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.596 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:55.596 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:16:56.168 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.168 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.168 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.168 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.429 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.429 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.429 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.429 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.429 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.689 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.949 13:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.949 { 00:16:56.949 "cntlid": 37, 00:16:56.949 "qid": 0, 00:16:56.949 "state": "enabled", 00:16:56.949 "thread": "nvmf_tgt_poll_group_000", 00:16:56.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.949 "listen_address": { 00:16:56.949 "trtype": "TCP", 00:16:56.949 "adrfam": "IPv4", 00:16:56.949 "traddr": "10.0.0.2", 00:16:56.949 "trsvcid": "4420" 00:16:56.949 }, 00:16:56.949 "peer_address": { 00:16:56.949 "trtype": "TCP", 00:16:56.949 "adrfam": "IPv4", 00:16:56.949 "traddr": "10.0.0.1", 00:16:56.949 "trsvcid": "48498" 00:16:56.949 }, 00:16:56.949 "auth": { 00:16:56.949 "state": "completed", 00:16:56.949 "digest": "sha256", 00:16:56.949 "dhgroup": "ffdhe6144" 00:16:56.949 } 00:16:56.949 } 00:16:56.949 ]' 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.949 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:57.209 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:16:57.780 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.040 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.620 00:16:58.620 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.620 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.620 13:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.620 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.620 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.620 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.620 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.620 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.621 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.621 { 00:16:58.621 "cntlid": 39, 00:16:58.621 "qid": 0, 00:16:58.621 "state": "enabled", 00:16:58.621 "thread": "nvmf_tgt_poll_group_000", 00:16:58.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.621 "listen_address": { 00:16:58.621 "trtype": "TCP", 00:16:58.621 "adrfam": 
"IPv4", 00:16:58.621 "traddr": "10.0.0.2", 00:16:58.621 "trsvcid": "4420" 00:16:58.621 }, 00:16:58.621 "peer_address": { 00:16:58.621 "trtype": "TCP", 00:16:58.621 "adrfam": "IPv4", 00:16:58.621 "traddr": "10.0.0.1", 00:16:58.621 "trsvcid": "48536" 00:16:58.621 }, 00:16:58.621 "auth": { 00:16:58.621 "state": "completed", 00:16:58.621 "digest": "sha256", 00:16:58.621 "dhgroup": "ffdhe6144" 00:16:58.621 } 00:16:58.621 } 00:16:58.621 ]' 00:16:58.621 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.621 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.621 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.621 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.621 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.884 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.884 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.884 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.884 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:58.884 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.450 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.708 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:59.708 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.708 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.708 
13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.709 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.279 00:17:00.279 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.279 13:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.279 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.539 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.539 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.539 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.539 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.539 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.539 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.539 { 00:17:00.539 "cntlid": 41, 00:17:00.539 "qid": 0, 00:17:00.539 "state": "enabled", 00:17:00.539 "thread": "nvmf_tgt_poll_group_000", 00:17:00.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.540 "listen_address": { 00:17:00.540 "trtype": "TCP", 00:17:00.540 "adrfam": "IPv4", 00:17:00.540 "traddr": "10.0.0.2", 00:17:00.540 "trsvcid": "4420" 00:17:00.540 }, 00:17:00.540 "peer_address": { 00:17:00.540 "trtype": "TCP", 00:17:00.540 "adrfam": "IPv4", 00:17:00.540 "traddr": "10.0.0.1", 00:17:00.540 "trsvcid": "48548" 00:17:00.540 }, 00:17:00.540 "auth": { 00:17:00.540 "state": "completed", 00:17:00.540 "digest": "sha256", 00:17:00.540 "dhgroup": "ffdhe8192" 00:17:00.540 } 00:17:00.540 } 00:17:00.540 ]' 00:17:00.540 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.540 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.815 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:00.815 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.384 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.644 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.905 00:17:01.905 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.905 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.905 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.166 13:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.166 { 00:17:02.166 "cntlid": 43, 00:17:02.166 "qid": 0, 00:17:02.166 "state": "enabled", 00:17:02.166 "thread": "nvmf_tgt_poll_group_000", 00:17:02.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.166 "listen_address": { 00:17:02.166 "trtype": "TCP", 00:17:02.166 "adrfam": "IPv4", 00:17:02.166 "traddr": "10.0.0.2", 00:17:02.166 "trsvcid": "4420" 00:17:02.166 }, 00:17:02.166 "peer_address": { 00:17:02.166 "trtype": "TCP", 00:17:02.166 "adrfam": "IPv4", 00:17:02.166 "traddr": "10.0.0.1", 00:17:02.166 "trsvcid": "57252" 00:17:02.166 }, 00:17:02.166 "auth": { 00:17:02.166 "state": "completed", 00:17:02.166 "digest": "sha256", 00:17:02.166 "dhgroup": "ffdhe8192" 00:17:02.166 } 00:17:02.166 } 00:17:02.166 ]' 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.166 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.427 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.427 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.427 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.427 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:02.427 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.996 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.256 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.829 00:17:03.829 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.829 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.830 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.091 { 00:17:04.091 "cntlid": 45, 00:17:04.091 "qid": 0, 00:17:04.091 "state": "enabled", 00:17:04.091 "thread": "nvmf_tgt_poll_group_000", 00:17:04.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.091 
"listen_address": { 00:17:04.091 "trtype": "TCP", 00:17:04.091 "adrfam": "IPv4", 00:17:04.091 "traddr": "10.0.0.2", 00:17:04.091 "trsvcid": "4420" 00:17:04.091 }, 00:17:04.091 "peer_address": { 00:17:04.091 "trtype": "TCP", 00:17:04.091 "adrfam": "IPv4", 00:17:04.091 "traddr": "10.0.0.1", 00:17:04.091 "trsvcid": "57274" 00:17:04.091 }, 00:17:04.091 "auth": { 00:17:04.091 "state": "completed", 00:17:04.091 "digest": "sha256", 00:17:04.091 "dhgroup": "ffdhe8192" 00:17:04.091 } 00:17:04.091 } 00:17:04.091 ]' 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.091 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.351 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:04.351 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.921 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.182 13:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.443 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.704 { 00:17:05.704 "cntlid": 47, 00:17:05.704 "qid": 0, 00:17:05.704 "state": "enabled", 00:17:05.704 "thread": "nvmf_tgt_poll_group_000", 00:17:05.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.704 "listen_address": { 00:17:05.704 "trtype": "TCP", 00:17:05.704 "adrfam": "IPv4", 00:17:05.704 "traddr": "10.0.0.2", 00:17:05.704 "trsvcid": "4420" 00:17:05.704 }, 00:17:05.704 "peer_address": { 00:17:05.704 "trtype": "TCP", 00:17:05.704 "adrfam": "IPv4", 00:17:05.704 "traddr": "10.0.0.1", 00:17:05.704 "trsvcid": "57300" 00:17:05.704 }, 00:17:05.704 "auth": { 00:17:05.704 "state": "completed", 00:17:05.704 "digest": "sha256", 00:17:05.704 "dhgroup": "ffdhe8192" 00:17:05.704 } 00:17:05.704 } 00:17:05.704 ]' 00:17:05.704 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.705 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.705 13:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.965 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.965 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.965 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.965 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.965 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.226 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:06.226 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.796 
13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.796 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.056 00:17:07.056 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.056 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.056 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.316 { 00:17:07.316 "cntlid": 49, 00:17:07.316 "qid": 0, 00:17:07.316 "state": "enabled", 00:17:07.316 "thread": "nvmf_tgt_poll_group_000", 00:17:07.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.316 "listen_address": { 00:17:07.316 "trtype": "TCP", 00:17:07.316 "adrfam": "IPv4", 00:17:07.316 "traddr": "10.0.0.2", 00:17:07.316 "trsvcid": "4420" 00:17:07.316 }, 00:17:07.316 "peer_address": { 00:17:07.316 "trtype": "TCP", 00:17:07.316 "adrfam": "IPv4", 00:17:07.316 "traddr": "10.0.0.1", 00:17:07.316 "trsvcid": "57328" 00:17:07.316 }, 00:17:07.316 "auth": { 00:17:07.316 "state": "completed", 00:17:07.316 "digest": "sha384", 00:17:07.316 "dhgroup": "null" 00:17:07.316 } 00:17:07.316 } 00:17:07.316 ]' 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:07.316 13:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.576 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:07.576 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.145 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.145 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.404 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.663 00:17:08.663 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.663 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.663 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.922 { 00:17:08.922 "cntlid": 51, 00:17:08.922 "qid": 0, 00:17:08.922 "state": "enabled", 00:17:08.922 "thread": "nvmf_tgt_poll_group_000", 00:17:08.922 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.922 "listen_address": { 00:17:08.922 "trtype": "TCP", 00:17:08.922 "adrfam": "IPv4", 00:17:08.922 "traddr": "10.0.0.2", 00:17:08.922 "trsvcid": "4420" 00:17:08.922 }, 00:17:08.922 "peer_address": { 00:17:08.922 "trtype": "TCP", 00:17:08.922 "adrfam": "IPv4", 00:17:08.922 "traddr": "10.0.0.1", 00:17:08.922 "trsvcid": "57358" 00:17:08.922 }, 00:17:08.922 "auth": { 00:17:08.922 "state": "completed", 00:17:08.922 "digest": "sha384", 00:17:08.922 "dhgroup": "null" 00:17:08.922 } 00:17:08.922 } 00:17:08.922 ]' 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.922 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.181 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:09.181 13:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.749 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.009 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.009 00:17:10.268 13:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.268 { 00:17:10.268 "cntlid": 53, 00:17:10.268 "qid": 0, 00:17:10.268 "state": "enabled", 00:17:10.268 "thread": "nvmf_tgt_poll_group_000", 00:17:10.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.268 "listen_address": { 00:17:10.268 "trtype": "TCP", 00:17:10.268 "adrfam": "IPv4", 00:17:10.268 "traddr": "10.0.0.2", 00:17:10.268 "trsvcid": "4420" 00:17:10.268 }, 00:17:10.268 "peer_address": { 00:17:10.268 "trtype": "TCP", 00:17:10.268 "adrfam": "IPv4", 00:17:10.268 "traddr": "10.0.0.1", 00:17:10.268 "trsvcid": "57374" 00:17:10.268 }, 00:17:10.268 "auth": { 00:17:10.268 "state": "completed", 00:17:10.268 "digest": "sha384", 00:17:10.268 "dhgroup": "null" 00:17:10.268 } 00:17:10.268 } 00:17:10.268 ]' 00:17:10.268 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:10.526 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.526 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.526 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.526 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.526 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.526 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.526 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.785 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:10.785 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:11.353 
13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.353 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.613 00:17:11.613 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.613 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.613 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.873 13:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.873 { 00:17:11.873 "cntlid": 55, 00:17:11.873 "qid": 0, 00:17:11.873 "state": "enabled", 00:17:11.873 "thread": "nvmf_tgt_poll_group_000", 00:17:11.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.873 "listen_address": { 00:17:11.873 "trtype": "TCP", 00:17:11.873 "adrfam": "IPv4", 00:17:11.873 "traddr": "10.0.0.2", 00:17:11.873 "trsvcid": "4420" 00:17:11.873 }, 00:17:11.873 "peer_address": { 00:17:11.873 "trtype": "TCP", 00:17:11.873 "adrfam": "IPv4", 00:17:11.873 "traddr": "10.0.0.1", 00:17:11.873 "trsvcid": "42620" 00:17:11.873 }, 00:17:11.873 "auth": { 00:17:11.873 "state": "completed", 00:17:11.873 "digest": "sha384", 00:17:11.873 "dhgroup": "null" 00:17:11.873 } 00:17:11.873 } 00:17:11.873 ]' 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.873 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.133 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:12.133 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.701 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.701 13:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.960 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.219 00:17:13.219 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.219 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.219 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.479 { 00:17:13.479 "cntlid": 57, 00:17:13.479 "qid": 0, 00:17:13.479 "state": "enabled", 00:17:13.479 "thread": "nvmf_tgt_poll_group_000", 00:17:13.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.479 "listen_address": { 00:17:13.479 "trtype": "TCP", 00:17:13.479 "adrfam": "IPv4", 00:17:13.479 "traddr": "10.0.0.2", 00:17:13.479 
"trsvcid": "4420" 00:17:13.479 }, 00:17:13.479 "peer_address": { 00:17:13.479 "trtype": "TCP", 00:17:13.479 "adrfam": "IPv4", 00:17:13.479 "traddr": "10.0.0.1", 00:17:13.479 "trsvcid": "42644" 00:17:13.479 }, 00:17:13.479 "auth": { 00:17:13.479 "state": "completed", 00:17:13.479 "digest": "sha384", 00:17:13.479 "dhgroup": "ffdhe2048" 00:17:13.479 } 00:17:13.479 } 00:17:13.479 ]' 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.479 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.479 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.479 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.479 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.479 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.479 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.740 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:13.740 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.311 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.569 13:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.569 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.570 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.570 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.570 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.828 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.828 { 00:17:14.828 "cntlid": 59, 00:17:14.828 "qid": 0, 00:17:14.828 "state": "enabled", 00:17:14.828 "thread": "nvmf_tgt_poll_group_000", 00:17:14.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.828 "listen_address": { 00:17:14.828 "trtype": "TCP", 00:17:14.828 "adrfam": "IPv4", 00:17:14.828 "traddr": "10.0.0.2", 00:17:14.828 "trsvcid": "4420" 00:17:14.828 }, 00:17:14.828 "peer_address": { 00:17:14.828 "trtype": "TCP", 00:17:14.828 "adrfam": "IPv4", 00:17:14.828 "traddr": "10.0.0.1", 00:17:14.828 "trsvcid": "42664" 00:17:14.828 }, 00:17:14.828 "auth": { 00:17:14.828 "state": "completed", 00:17:14.828 "digest": "sha384", 00:17:14.828 "dhgroup": "ffdhe2048" 00:17:14.828 } 00:17:14.828 } 00:17:14.828 ]' 00:17:14.828 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.086 13:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.086 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.086 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.086 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.086 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.086 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.086 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.345 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:15.345 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.914 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.174 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.175 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.175 00:17:16.175 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.175 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.175 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.434 13:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.434 { 00:17:16.434 "cntlid": 61, 00:17:16.434 "qid": 0, 00:17:16.434 "state": "enabled", 00:17:16.434 "thread": "nvmf_tgt_poll_group_000", 00:17:16.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.434 "listen_address": { 00:17:16.434 "trtype": "TCP", 00:17:16.434 "adrfam": "IPv4", 00:17:16.434 "traddr": "10.0.0.2", 00:17:16.434 "trsvcid": "4420" 00:17:16.434 }, 00:17:16.434 "peer_address": { 00:17:16.434 "trtype": "TCP", 00:17:16.434 "adrfam": "IPv4", 00:17:16.434 "traddr": "10.0.0.1", 00:17:16.434 "trsvcid": "42684" 00:17:16.434 }, 00:17:16.434 "auth": { 00:17:16.434 "state": "completed", 00:17:16.434 "digest": "sha384", 00:17:16.434 "dhgroup": "ffdhe2048" 00:17:16.434 } 00:17:16.434 } 00:17:16.434 ]' 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.434 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.693 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.693 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.693 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.693 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.693 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.952 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:16.952 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.522 13:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.522 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.782 00:17:17.782 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.782 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.782 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.044 { 00:17:18.044 "cntlid": 63, 00:17:18.044 "qid": 0, 00:17:18.044 "state": "enabled", 00:17:18.044 "thread": "nvmf_tgt_poll_group_000", 00:17:18.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.044 "listen_address": { 00:17:18.044 "trtype": "TCP", 00:17:18.044 "adrfam": 
"IPv4", 00:17:18.044 "traddr": "10.0.0.2", 00:17:18.044 "trsvcid": "4420" 00:17:18.044 }, 00:17:18.044 "peer_address": { 00:17:18.044 "trtype": "TCP", 00:17:18.044 "adrfam": "IPv4", 00:17:18.044 "traddr": "10.0.0.1", 00:17:18.044 "trsvcid": "42726" 00:17:18.044 }, 00:17:18.044 "auth": { 00:17:18.044 "state": "completed", 00:17:18.044 "digest": "sha384", 00:17:18.044 "dhgroup": "ffdhe2048" 00:17:18.044 } 00:17:18.044 } 00:17:18.044 ]' 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.044 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:18.304 13:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.243 
13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.243 13:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.503 00:17:19.503 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.503 13:25:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.503 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.762 { 00:17:19.762 "cntlid": 65, 00:17:19.762 "qid": 0, 00:17:19.762 "state": "enabled", 00:17:19.762 "thread": "nvmf_tgt_poll_group_000", 00:17:19.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.762 "listen_address": { 00:17:19.762 "trtype": "TCP", 00:17:19.762 "adrfam": "IPv4", 00:17:19.762 "traddr": "10.0.0.2", 00:17:19.762 "trsvcid": "4420" 00:17:19.762 }, 00:17:19.762 "peer_address": { 00:17:19.762 "trtype": "TCP", 00:17:19.762 "adrfam": "IPv4", 00:17:19.762 "traddr": "10.0.0.1", 00:17:19.762 "trsvcid": "42750" 00:17:19.762 }, 00:17:19.762 "auth": { 00:17:19.762 "state": "completed", 00:17:19.762 "digest": "sha384", 00:17:19.762 "dhgroup": "ffdhe3072" 00:17:19.762 } 00:17:19.762 } 00:17:19.762 ]' 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.762 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.023 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:20.023 13:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.654 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.938 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.938 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.197 13:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.197 { 00:17:21.197 "cntlid": 67, 00:17:21.197 "qid": 0, 00:17:21.197 "state": "enabled", 00:17:21.197 "thread": "nvmf_tgt_poll_group_000", 00:17:21.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.197 "listen_address": { 00:17:21.197 "trtype": "TCP", 00:17:21.197 "adrfam": "IPv4", 00:17:21.197 "traddr": "10.0.0.2", 00:17:21.197 "trsvcid": "4420" 00:17:21.197 }, 00:17:21.197 "peer_address": { 00:17:21.197 "trtype": "TCP", 00:17:21.197 "adrfam": "IPv4", 00:17:21.197 "traddr": "10.0.0.1", 00:17:21.197 "trsvcid": "46002" 00:17:21.197 }, 00:17:21.197 "auth": { 00:17:21.197 "state": "completed", 00:17:21.197 "digest": "sha384", 00:17:21.197 "dhgroup": "ffdhe3072" 00:17:21.197 } 00:17:21.197 } 00:17:21.197 ]' 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.197 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.456 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.457 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.457 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.457 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.457 13:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.457 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:21.457 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:22.027 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.288 13:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.548 00:17:22.548 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.549 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.549 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.808 { 00:17:22.808 "cntlid": 69, 00:17:22.808 "qid": 0, 00:17:22.808 "state": "enabled", 00:17:22.808 "thread": "nvmf_tgt_poll_group_000", 00:17:22.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.808 
"listen_address": { 00:17:22.808 "trtype": "TCP", 00:17:22.808 "adrfam": "IPv4", 00:17:22.808 "traddr": "10.0.0.2", 00:17:22.808 "trsvcid": "4420" 00:17:22.808 }, 00:17:22.808 "peer_address": { 00:17:22.808 "trtype": "TCP", 00:17:22.808 "adrfam": "IPv4", 00:17:22.808 "traddr": "10.0.0.1", 00:17:22.808 "trsvcid": "46020" 00:17:22.808 }, 00:17:22.808 "auth": { 00:17:22.808 "state": "completed", 00:17:22.808 "digest": "sha384", 00:17:22.808 "dhgroup": "ffdhe3072" 00:17:22.808 } 00:17:22.808 } 00:17:22.808 ]' 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.808 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.068 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.068 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.068 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.068 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:23.068 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.637 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.897 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.158 00:17:24.158 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.158 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:24.158 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.419 { 00:17:24.419 "cntlid": 71, 00:17:24.419 "qid": 0, 00:17:24.419 "state": "enabled", 00:17:24.419 "thread": "nvmf_tgt_poll_group_000", 00:17:24.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.419 "listen_address": { 00:17:24.419 "trtype": "TCP", 00:17:24.419 "adrfam": "IPv4", 00:17:24.419 "traddr": "10.0.0.2", 00:17:24.419 "trsvcid": "4420" 00:17:24.419 }, 00:17:24.419 "peer_address": { 00:17:24.419 "trtype": "TCP", 00:17:24.419 "adrfam": "IPv4", 00:17:24.419 "traddr": "10.0.0.1", 00:17:24.419 "trsvcid": "46058" 00:17:24.419 }, 00:17:24.419 "auth": { 00:17:24.419 "state": "completed", 00:17:24.419 "digest": "sha384", 00:17:24.419 "dhgroup": "ffdhe3072" 00:17:24.419 } 00:17:24.419 } 00:17:24.419 ]' 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.419 13:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.419 13:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.419 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.419 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.419 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.419 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.419 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.678 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:24.678 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.245 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.246 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.504 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.505 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.505 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.505 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.765 00:17:25.765 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.765 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.765 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.026 13:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.026 { 00:17:26.026 "cntlid": 73, 00:17:26.026 "qid": 0, 00:17:26.026 "state": "enabled", 00:17:26.026 "thread": "nvmf_tgt_poll_group_000", 00:17:26.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.026 "listen_address": { 00:17:26.026 "trtype": "TCP", 00:17:26.026 "adrfam": "IPv4", 00:17:26.026 "traddr": "10.0.0.2", 00:17:26.026 "trsvcid": "4420" 00:17:26.026 }, 00:17:26.026 "peer_address": { 00:17:26.026 "trtype": "TCP", 00:17:26.026 "adrfam": "IPv4", 00:17:26.026 "traddr": "10.0.0.1", 00:17:26.026 "trsvcid": "46092" 00:17:26.026 }, 00:17:26.026 "auth": { 00:17:26.026 "state": "completed", 00:17:26.026 "digest": "sha384", 00:17:26.026 "dhgroup": "ffdhe4096" 00:17:26.026 } 00:17:26.026 } 00:17:26.026 ]' 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.026 13:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.286 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:26.286 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.858 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.118 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.378 00:17:27.378 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.378 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.378 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.638 { 00:17:27.638 "cntlid": 75, 00:17:27.638 "qid": 0, 00:17:27.638 "state": "enabled", 00:17:27.638 "thread": "nvmf_tgt_poll_group_000", 00:17:27.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.638 
"listen_address": { 00:17:27.638 "trtype": "TCP", 00:17:27.638 "adrfam": "IPv4", 00:17:27.638 "traddr": "10.0.0.2", 00:17:27.638 "trsvcid": "4420" 00:17:27.638 }, 00:17:27.638 "peer_address": { 00:17:27.638 "trtype": "TCP", 00:17:27.638 "adrfam": "IPv4", 00:17:27.638 "traddr": "10.0.0.1", 00:17:27.638 "trsvcid": "46112" 00:17:27.638 }, 00:17:27.638 "auth": { 00:17:27.638 "state": "completed", 00:17:27.638 "digest": "sha384", 00:17:27.638 "dhgroup": "ffdhe4096" 00:17:27.638 } 00:17:27.638 } 00:17:27.638 ]' 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.638 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.900 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:27.900 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.470 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.728 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.987 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.987 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.987 { 00:17:28.987 "cntlid": 77, 00:17:28.987 "qid": 0, 00:17:28.987 "state": "enabled", 00:17:28.987 "thread": "nvmf_tgt_poll_group_000", 00:17:28.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.987 "listen_address": { 00:17:28.987 "trtype": "TCP", 00:17:28.987 "adrfam": "IPv4", 00:17:28.987 "traddr": "10.0.0.2", 00:17:28.987 "trsvcid": "4420" 00:17:28.987 }, 00:17:28.987 "peer_address": { 00:17:28.987 "trtype": "TCP", 00:17:28.987 "adrfam": "IPv4", 00:17:28.987 "traddr": "10.0.0.1", 00:17:28.987 "trsvcid": "46124" 00:17:28.987 }, 00:17:28.987 "auth": { 00:17:28.987 "state": "completed", 00:17:28.987 "digest": "sha384", 00:17:28.987 "dhgroup": "ffdhe4096" 00:17:28.987 } 00:17:28.987 } 00:17:28.987 ]' 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.247 13:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.247 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.507 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:29.507 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.075 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.076 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.336 13:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.336 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.595 00:17:30.595 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.595 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.595 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.595 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.595 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.595 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.596 13:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.596 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.596 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.596 { 00:17:30.596 "cntlid": 79, 00:17:30.596 "qid": 0, 00:17:30.596 "state": "enabled", 00:17:30.596 "thread": "nvmf_tgt_poll_group_000", 00:17:30.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.596 "listen_address": { 00:17:30.596 "trtype": "TCP", 00:17:30.596 "adrfam": "IPv4", 00:17:30.596 "traddr": "10.0.0.2", 00:17:30.596 "trsvcid": "4420" 00:17:30.596 }, 00:17:30.596 "peer_address": { 00:17:30.596 "trtype": "TCP", 00:17:30.596 "adrfam": "IPv4", 00:17:30.596 "traddr": "10.0.0.1", 00:17:30.596 "trsvcid": "35720" 00:17:30.596 }, 00:17:30.596 "auth": { 00:17:30.596 "state": "completed", 00:17:30.596 "digest": "sha384", 00:17:30.596 "dhgroup": "ffdhe4096" 00:17:30.596 } 00:17:30.596 } 00:17:30.596 ]' 00:17:30.596 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.855 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.855 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.855 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.855 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.855 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.855 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.855 13:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.115 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:31.115 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:31.683 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.978 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.237 00:17:32.237 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.237 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.237 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.496 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.496 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.497 { 00:17:32.497 "cntlid": 81, 00:17:32.497 "qid": 0, 00:17:32.497 "state": "enabled", 00:17:32.497 "thread": "nvmf_tgt_poll_group_000", 00:17:32.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.497 "listen_address": { 
00:17:32.497 "trtype": "TCP", 00:17:32.497 "adrfam": "IPv4", 00:17:32.497 "traddr": "10.0.0.2", 00:17:32.497 "trsvcid": "4420" 00:17:32.497 }, 00:17:32.497 "peer_address": { 00:17:32.497 "trtype": "TCP", 00:17:32.497 "adrfam": "IPv4", 00:17:32.497 "traddr": "10.0.0.1", 00:17:32.497 "trsvcid": "35750" 00:17:32.497 }, 00:17:32.497 "auth": { 00:17:32.497 "state": "completed", 00:17:32.497 "digest": "sha384", 00:17:32.497 "dhgroup": "ffdhe6144" 00:17:32.497 } 00:17:32.497 } 00:17:32.497 ]' 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.497 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.756 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.756 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.756 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.756 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:32.757 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:33.324 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.583 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.583 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.842 00:17:34.102 13:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.102 { 00:17:34.102 "cntlid": 83, 00:17:34.102 "qid": 0, 00:17:34.102 "state": "enabled", 00:17:34.102 "thread": "nvmf_tgt_poll_group_000", 00:17:34.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.102 "listen_address": { 00:17:34.102 "trtype": "TCP", 00:17:34.102 "adrfam": "IPv4", 00:17:34.102 "traddr": "10.0.0.2", 00:17:34.102 "trsvcid": "4420" 00:17:34.102 }, 00:17:34.102 "peer_address": { 00:17:34.102 "trtype": "TCP", 00:17:34.102 "adrfam": "IPv4", 00:17:34.102 "traddr": "10.0.0.1", 00:17:34.102 "trsvcid": "35774" 00:17:34.102 }, 00:17:34.102 "auth": { 00:17:34.102 "state": "completed", 00:17:34.102 "digest": "sha384", 00:17:34.102 "dhgroup": "ffdhe6144" 00:17:34.102 } 00:17:34.102 } 00:17:34.102 ]' 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:34.102 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.362 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.362 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.362 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.362 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.362 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.362 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.362 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:34.362 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:35.300 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.301 13:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.301 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.560 00:17:35.560 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.560 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.560 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.818 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.818 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.818 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.818 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.839 { 00:17:35.839 "cntlid": 85, 00:17:35.839 "qid": 0, 00:17:35.839 "state": "enabled", 00:17:35.839 "thread": "nvmf_tgt_poll_group_000", 00:17:35.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.839 "listen_address": { 00:17:35.839 "trtype": "TCP", 00:17:35.839 "adrfam": "IPv4", 00:17:35.839 "traddr": "10.0.0.2", 00:17:35.839 "trsvcid": "4420" 00:17:35.839 }, 00:17:35.839 "peer_address": { 00:17:35.839 "trtype": "TCP", 00:17:35.839 "adrfam": "IPv4", 00:17:35.839 "traddr": "10.0.0.1", 00:17:35.839 "trsvcid": "35792" 00:17:35.839 }, 00:17:35.839 "auth": { 00:17:35.839 "state": "completed", 00:17:35.839 "digest": "sha384", 00:17:35.839 "dhgroup": "ffdhe6144" 00:17:35.839 } 00:17:35.839 } 00:17:35.839 ]' 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.839 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.099 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:36.099 13:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:36.670 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.931 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.192 00:17:37.192 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.192 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.192 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.453 { 00:17:37.453 "cntlid": 87, 00:17:37.453 "qid": 0, 00:17:37.453 "state": "enabled", 00:17:37.453 "thread": "nvmf_tgt_poll_group_000", 00:17:37.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.453 "listen_address": { 00:17:37.453 "trtype": 
"TCP", 00:17:37.453 "adrfam": "IPv4", 00:17:37.453 "traddr": "10.0.0.2", 00:17:37.453 "trsvcid": "4420" 00:17:37.453 }, 00:17:37.453 "peer_address": { 00:17:37.453 "trtype": "TCP", 00:17:37.453 "adrfam": "IPv4", 00:17:37.453 "traddr": "10.0.0.1", 00:17:37.453 "trsvcid": "35816" 00:17:37.453 }, 00:17:37.453 "auth": { 00:17:37.453 "state": "completed", 00:17:37.453 "digest": "sha384", 00:17:37.453 "dhgroup": "ffdhe6144" 00:17:37.453 } 00:17:37.453 } 00:17:37.453 ]' 00:17:37.453 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.453 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.714 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:37.714 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.285 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.549 13:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.549 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.120 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.120 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.379 { 00:17:39.379 "cntlid": 89, 00:17:39.379 "qid": 0, 00:17:39.379 "state": "enabled", 00:17:39.379 "thread": "nvmf_tgt_poll_group_000", 00:17:39.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.379 "listen_address": { 00:17:39.379 "trtype": "TCP", 00:17:39.379 "adrfam": "IPv4", 00:17:39.379 "traddr": "10.0.0.2", 00:17:39.379 "trsvcid": "4420" 00:17:39.379 }, 00:17:39.379 "peer_address": { 00:17:39.379 "trtype": "TCP", 00:17:39.379 "adrfam": "IPv4", 00:17:39.379 "traddr": "10.0.0.1", 00:17:39.379 "trsvcid": "35842" 00:17:39.379 }, 00:17:39.379 "auth": { 00:17:39.379 "state": "completed", 00:17:39.379 "digest": "sha384", 00:17:39.379 "dhgroup": "ffdhe8192" 00:17:39.379 } 00:17:39.379 } 00:17:39.379 ]' 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.379 13:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.379 13:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.639 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:39.639 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.209 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.468 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.469 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.728 00:17:40.728 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.728 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.728 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.988 { 00:17:40.988 "cntlid": 91, 00:17:40.988 "qid": 0, 00:17:40.988 "state": "enabled", 00:17:40.988 "thread": "nvmf_tgt_poll_group_000", 00:17:40.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.988 "listen_address": { 00:17:40.988 "trtype": "TCP", 00:17:40.988 "adrfam": "IPv4", 00:17:40.988 "traddr": "10.0.0.2", 00:17:40.988 "trsvcid": "4420" 00:17:40.988 }, 00:17:40.988 "peer_address": { 00:17:40.988 "trtype": "TCP", 00:17:40.988 "adrfam": "IPv4", 00:17:40.988 "traddr": "10.0.0.1", 00:17:40.988 "trsvcid": "45790" 00:17:40.988 }, 00:17:40.988 "auth": { 00:17:40.988 "state": "completed", 00:17:40.988 "digest": "sha384", 00:17:40.988 "dhgroup": "ffdhe8192" 00:17:40.988 } 00:17:40.988 } 00:17:40.988 ]' 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.988 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.247 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.247 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.247 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:41.247 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.247 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.247 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:41.248 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.188 13:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.758 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.758 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.758 { 00:17:42.758 "cntlid": 93, 00:17:42.758 "qid": 0, 00:17:42.758 "state": "enabled", 00:17:42.758 "thread": "nvmf_tgt_poll_group_000", 00:17:42.758 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.759 "listen_address": { 00:17:42.759 "trtype": "TCP", 00:17:42.759 "adrfam": "IPv4", 00:17:42.759 "traddr": "10.0.0.2", 00:17:42.759 "trsvcid": "4420" 00:17:42.759 }, 00:17:42.759 "peer_address": { 00:17:42.759 "trtype": "TCP", 00:17:42.759 "adrfam": "IPv4", 00:17:42.759 "traddr": "10.0.0.1", 00:17:42.759 "trsvcid": "45810" 00:17:42.759 }, 00:17:42.759 "auth": { 00:17:42.759 "state": "completed", 00:17:42.759 "digest": "sha384", 00:17:42.759 "dhgroup": "ffdhe8192" 00:17:42.759 } 00:17:42.759 } 00:17:42.759 ]' 00:17:42.759 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.759 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.759 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.019 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.019 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.019 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.019 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.019 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.280 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:43.280 13:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.850 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.851 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.420 00:17:44.420 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:44.420 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.420 13:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.681 { 00:17:44.681 "cntlid": 95, 00:17:44.681 "qid": 0, 00:17:44.681 "state": "enabled", 00:17:44.681 "thread": "nvmf_tgt_poll_group_000", 00:17:44.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.681 "listen_address": { 00:17:44.681 "trtype": "TCP", 00:17:44.681 "adrfam": "IPv4", 00:17:44.681 "traddr": "10.0.0.2", 00:17:44.681 "trsvcid": "4420" 00:17:44.681 }, 00:17:44.681 "peer_address": { 00:17:44.681 "trtype": "TCP", 00:17:44.681 "adrfam": "IPv4", 00:17:44.681 "traddr": "10.0.0.1", 00:17:44.681 "trsvcid": "45828" 00:17:44.681 }, 00:17:44.681 "auth": { 00:17:44.681 "state": "completed", 00:17:44.681 "digest": "sha384", 00:17:44.681 "dhgroup": "ffdhe8192" 00:17:44.681 } 00:17:44.681 } 00:17:44.681 ]' 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.681 13:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.681 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.942 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:44.942 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.521 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.781 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.039 00:17:46.039 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.039 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.039 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.299 13:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.299 { 00:17:46.299 "cntlid": 97, 00:17:46.299 "qid": 0, 00:17:46.299 "state": "enabled", 00:17:46.299 "thread": "nvmf_tgt_poll_group_000", 00:17:46.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.299 "listen_address": { 00:17:46.299 "trtype": "TCP", 00:17:46.299 "adrfam": "IPv4", 00:17:46.299 "traddr": "10.0.0.2", 00:17:46.299 "trsvcid": "4420" 00:17:46.299 }, 00:17:46.299 "peer_address": { 00:17:46.299 "trtype": "TCP", 00:17:46.299 "adrfam": "IPv4", 00:17:46.299 "traddr": "10.0.0.1", 00:17:46.299 "trsvcid": "45858" 00:17:46.299 }, 00:17:46.299 "auth": { 00:17:46.299 "state": "completed", 00:17:46.299 "digest": "sha512", 00:17:46.299 "dhgroup": "null" 00:17:46.299 } 00:17:46.299 } 00:17:46.299 ]' 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.299 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.560 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:46.560 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.130 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.389 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.648 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.648 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.648 { 00:17:47.648 "cntlid": 99, 
00:17:47.648 "qid": 0, 00:17:47.649 "state": "enabled", 00:17:47.649 "thread": "nvmf_tgt_poll_group_000", 00:17:47.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.649 "listen_address": { 00:17:47.649 "trtype": "TCP", 00:17:47.649 "adrfam": "IPv4", 00:17:47.649 "traddr": "10.0.0.2", 00:17:47.649 "trsvcid": "4420" 00:17:47.649 }, 00:17:47.649 "peer_address": { 00:17:47.649 "trtype": "TCP", 00:17:47.649 "adrfam": "IPv4", 00:17:47.649 "traddr": "10.0.0.1", 00:17:47.649 "trsvcid": "45886" 00:17:47.649 }, 00:17:47.649 "auth": { 00:17:47.649 "state": "completed", 00:17:47.649 "digest": "sha512", 00:17:47.649 "dhgroup": "null" 00:17:47.649 } 00:17:47.649 } 00:17:47.649 ]' 00:17:47.649 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.908 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.167 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret 
DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:48.167 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.736 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.995 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.995 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.996 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.996 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.996 00:17:48.996 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.996 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.996 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.255 { 00:17:49.255 "cntlid": 101, 00:17:49.255 "qid": 0, 00:17:49.255 "state": "enabled", 00:17:49.255 "thread": "nvmf_tgt_poll_group_000", 00:17:49.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.255 "listen_address": { 00:17:49.255 "trtype": "TCP", 00:17:49.255 "adrfam": "IPv4", 00:17:49.255 "traddr": "10.0.0.2", 00:17:49.255 "trsvcid": "4420" 00:17:49.255 }, 00:17:49.255 "peer_address": { 00:17:49.255 "trtype": "TCP", 00:17:49.255 "adrfam": "IPv4", 00:17:49.255 "traddr": "10.0.0.1", 00:17:49.255 "trsvcid": "45900" 00:17:49.255 }, 00:17:49.255 "auth": { 00:17:49.255 "state": "completed", 00:17:49.255 "digest": "sha512", 00:17:49.255 "dhgroup": "null" 00:17:49.255 } 00:17:49.255 } 
00:17:49.255 ]' 00:17:49.255 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.256 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.256 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.514 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.514 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.514 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.514 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.514 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.514 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:49.514 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.454 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.454 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.716 00:17:50.716 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.716 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.716 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.978 { 00:17:50.978 "cntlid": 103, 00:17:50.978 "qid": 0, 00:17:50.978 "state": "enabled", 00:17:50.978 "thread": "nvmf_tgt_poll_group_000", 00:17:50.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.978 "listen_address": { 00:17:50.978 "trtype": "TCP", 00:17:50.978 "adrfam": "IPv4", 00:17:50.978 "traddr": "10.0.0.2", 00:17:50.978 "trsvcid": "4420" 00:17:50.978 }, 00:17:50.978 "peer_address": { 00:17:50.978 "trtype": "TCP", 00:17:50.978 "adrfam": "IPv4", 00:17:50.978 "traddr": "10.0.0.1", 00:17:50.978 "trsvcid": "50682" 00:17:50.978 }, 00:17:50.978 "auth": { 00:17:50.978 "state": "completed", 00:17:50.978 "digest": "sha512", 00:17:50.978 "dhgroup": "null" 00:17:50.978 } 00:17:50.978 } 00:17:50.978 ]' 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.978 13:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.978 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.240 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:51.240 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.815 13:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.815 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.076 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.336 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.336 { 00:17:52.336 "cntlid": 105, 00:17:52.336 "qid": 0, 00:17:52.336 "state": "enabled", 00:17:52.336 "thread": "nvmf_tgt_poll_group_000", 00:17:52.336 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.336 "listen_address": { 00:17:52.336 "trtype": "TCP", 00:17:52.336 "adrfam": "IPv4", 00:17:52.336 "traddr": "10.0.0.2", 00:17:52.336 "trsvcid": "4420" 00:17:52.336 }, 00:17:52.336 "peer_address": { 00:17:52.336 "trtype": "TCP", 00:17:52.336 "adrfam": "IPv4", 00:17:52.336 "traddr": "10.0.0.1", 00:17:52.336 "trsvcid": "50702" 00:17:52.336 }, 00:17:52.336 "auth": { 00:17:52.336 "state": "completed", 00:17:52.336 "digest": "sha512", 00:17:52.336 "dhgroup": "ffdhe2048" 00:17:52.336 } 00:17:52.336 } 00:17:52.336 ]' 00:17:52.336 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.597 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.597 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.597 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.597 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.597 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.597 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.597 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.858 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret 
DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:52.858 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.429 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.429 13:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.429 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.691 00:17:53.691 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.691 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.691 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.953 { 00:17:53.953 "cntlid": 107, 00:17:53.953 "qid": 0, 00:17:53.953 "state": "enabled", 00:17:53.953 "thread": "nvmf_tgt_poll_group_000", 00:17:53.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.953 "listen_address": { 00:17:53.953 "trtype": "TCP", 00:17:53.953 "adrfam": "IPv4", 00:17:53.953 "traddr": "10.0.0.2", 00:17:53.953 "trsvcid": "4420" 00:17:53.953 }, 00:17:53.953 "peer_address": { 00:17:53.953 "trtype": "TCP", 00:17:53.953 "adrfam": "IPv4", 00:17:53.953 "traddr": "10.0.0.1", 00:17:53.953 "trsvcid": "50736" 00:17:53.953 }, 00:17:53.953 "auth": { 00:17:53.953 "state": 
"completed", 00:17:53.953 "digest": "sha512", 00:17:53.953 "dhgroup": "ffdhe2048" 00:17:53.953 } 00:17:53.953 } 00:17:53.953 ]' 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.953 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.214 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.214 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.214 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.214 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:54.214 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:17:54.785 13:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.785 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.047 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.308 00:17:55.308 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.308 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.308 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.570 
13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.570 { 00:17:55.570 "cntlid": 109, 00:17:55.570 "qid": 0, 00:17:55.570 "state": "enabled", 00:17:55.570 "thread": "nvmf_tgt_poll_group_000", 00:17:55.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.570 "listen_address": { 00:17:55.570 "trtype": "TCP", 00:17:55.570 "adrfam": "IPv4", 00:17:55.570 "traddr": "10.0.0.2", 00:17:55.570 "trsvcid": "4420" 00:17:55.570 }, 00:17:55.570 "peer_address": { 00:17:55.570 "trtype": "TCP", 00:17:55.570 "adrfam": "IPv4", 00:17:55.570 "traddr": "10.0.0.1", 00:17:55.570 "trsvcid": "50770" 00:17:55.570 }, 00:17:55.570 "auth": { 00:17:55.570 "state": "completed", 00:17:55.570 "digest": "sha512", 00:17:55.570 "dhgroup": "ffdhe2048" 00:17:55.570 } 00:17:55.570 } 00:17:55.570 ]' 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.570 13:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.570 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.571 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.571 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.832 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:55.832 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.404 
13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.404 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.666 13:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.666 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.927 00:17:56.927 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.927 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.927 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.927 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.187 { 00:17:57.187 "cntlid": 111, 
00:17:57.187 "qid": 0, 00:17:57.187 "state": "enabled", 00:17:57.187 "thread": "nvmf_tgt_poll_group_000", 00:17:57.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.187 "listen_address": { 00:17:57.187 "trtype": "TCP", 00:17:57.187 "adrfam": "IPv4", 00:17:57.187 "traddr": "10.0.0.2", 00:17:57.187 "trsvcid": "4420" 00:17:57.187 }, 00:17:57.187 "peer_address": { 00:17:57.187 "trtype": "TCP", 00:17:57.187 "adrfam": "IPv4", 00:17:57.187 "traddr": "10.0.0.1", 00:17:57.187 "trsvcid": "50792" 00:17:57.187 }, 00:17:57.187 "auth": { 00:17:57.187 "state": "completed", 00:17:57.187 "digest": "sha512", 00:17:57.187 "dhgroup": "ffdhe2048" 00:17:57.187 } 00:17:57.187 } 00:17:57.187 ]' 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.187 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.448 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:57.448 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.019 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.020 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.020 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.304 13:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.304 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.304 00:17:58.626 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.626 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.626 13:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.626 { 00:17:58.626 "cntlid": 113, 00:17:58.626 "qid": 0, 00:17:58.626 "state": "enabled", 00:17:58.626 "thread": "nvmf_tgt_poll_group_000", 00:17:58.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.626 "listen_address": { 00:17:58.626 "trtype": "TCP", 00:17:58.626 "adrfam": "IPv4", 00:17:58.626 "traddr": "10.0.0.2", 00:17:58.626 "trsvcid": "4420" 00:17:58.626 }, 00:17:58.626 "peer_address": { 00:17:58.626 "trtype": "TCP", 00:17:58.626 "adrfam": "IPv4", 00:17:58.626 "traddr": "10.0.0.1", 00:17:58.626 "trsvcid": "50806" 00:17:58.626 }, 00:17:58.626 "auth": { 00:17:58.626 "state": 
"completed", 00:17:58.626 "digest": "sha512", 00:17:58.626 "dhgroup": "ffdhe3072" 00:17:58.626 } 00:17:58.626 } 00:17:58.626 ]' 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.626 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.899 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.899 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.899 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.899 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:58.899 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret 
DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.467 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.727 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.987 00:17:59.987 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.987 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.987 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.247 { 00:18:00.247 "cntlid": 115, 00:18:00.247 "qid": 0, 00:18:00.247 "state": "enabled", 00:18:00.247 "thread": "nvmf_tgt_poll_group_000", 00:18:00.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.247 "listen_address": { 00:18:00.247 "trtype": "TCP", 00:18:00.247 "adrfam": "IPv4", 00:18:00.247 "traddr": "10.0.0.2", 00:18:00.247 "trsvcid": "4420" 00:18:00.247 }, 00:18:00.247 "peer_address": { 00:18:00.247 "trtype": "TCP", 00:18:00.247 "adrfam": "IPv4", 00:18:00.247 "traddr": "10.0.0.1", 00:18:00.247 "trsvcid": "50832" 00:18:00.247 }, 00:18:00.247 "auth": { 00:18:00.247 "state": "completed", 00:18:00.247 "digest": "sha512", 00:18:00.247 "dhgroup": "ffdhe3072" 00:18:00.247 } 00:18:00.247 } 00:18:00.247 ]' 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.247 13:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.247 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.509 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:00.509 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.081 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.343 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.605 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.605 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.867 13:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.867 { 00:18:01.867 "cntlid": 117, 00:18:01.867 "qid": 0, 00:18:01.867 "state": "enabled", 00:18:01.867 "thread": "nvmf_tgt_poll_group_000", 00:18:01.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.867 "listen_address": { 00:18:01.867 "trtype": "TCP", 00:18:01.867 "adrfam": "IPv4", 00:18:01.867 "traddr": "10.0.0.2", 00:18:01.867 "trsvcid": "4420" 00:18:01.867 }, 00:18:01.867 "peer_address": { 00:18:01.867 "trtype": "TCP", 00:18:01.867 "adrfam": "IPv4", 00:18:01.867 "traddr": "10.0.0.1", 00:18:01.867 "trsvcid": "34482" 00:18:01.867 }, 00:18:01.867 "auth": { 00:18:01.867 "state": "completed", 00:18:01.867 "digest": "sha512", 00:18:01.867 "dhgroup": "ffdhe3072" 00:18:01.867 } 00:18:01.867 } 00:18:01.867 ]' 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.867 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.127 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:02.127 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.698 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.960 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.221 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.221 { 00:18:03.221 "cntlid": 119, 00:18:03.221 "qid": 0, 00:18:03.221 "state": "enabled", 00:18:03.221 "thread": "nvmf_tgt_poll_group_000", 00:18:03.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.221 "listen_address": { 00:18:03.221 "trtype": "TCP", 00:18:03.221 "adrfam": "IPv4", 00:18:03.221 "traddr": "10.0.0.2", 00:18:03.221 "trsvcid": "4420" 00:18:03.221 }, 00:18:03.221 "peer_address": { 00:18:03.221 "trtype": "TCP", 00:18:03.221 "adrfam": "IPv4", 00:18:03.221 "traddr": "10.0.0.1", 
00:18:03.221 "trsvcid": "34494" 00:18:03.221 }, 00:18:03.221 "auth": { 00:18:03.221 "state": "completed", 00:18:03.221 "digest": "sha512", 00:18:03.221 "dhgroup": "ffdhe3072" 00:18:03.221 } 00:18:03.221 } 00:18:03.221 ]' 00:18:03.221 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.482 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.742 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:03.743 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.314 13:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.314 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.575 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.575 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.575 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.575 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.575 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.836 { 00:18:04.836 "cntlid": 121, 00:18:04.836 "qid": 0, 00:18:04.836 "state": "enabled", 00:18:04.836 "thread": "nvmf_tgt_poll_group_000", 00:18:04.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.836 "listen_address": { 00:18:04.836 "trtype": "TCP", 00:18:04.836 "adrfam": "IPv4", 00:18:04.836 "traddr": "10.0.0.2", 00:18:04.836 "trsvcid": "4420" 00:18:04.836 }, 00:18:04.836 "peer_address": { 00:18:04.836 "trtype": "TCP", 00:18:04.836 "adrfam": "IPv4", 00:18:04.836 "traddr": "10.0.0.1", 00:18:04.836 "trsvcid": "34518" 00:18:04.836 }, 00:18:04.836 "auth": { 00:18:04.836 "state": "completed", 00:18:04.836 "digest": "sha512", 00:18:04.836 "dhgroup": "ffdhe4096" 00:18:04.836 } 00:18:04.836 } 00:18:04.836 ]' 00:18:04.836 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.097 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.097 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.097 13:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.097 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.097 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.097 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.097 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.356 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:05.356 13:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.925 13:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.925 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.185 13:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.185 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.445 00:18:06.445 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.445 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.445 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.445 { 00:18:06.445 "cntlid": 123, 00:18:06.445 "qid": 0, 00:18:06.445 "state": "enabled", 00:18:06.445 "thread": "nvmf_tgt_poll_group_000", 00:18:06.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.445 "listen_address": { 00:18:06.445 "trtype": "TCP", 00:18:06.445 "adrfam": "IPv4", 00:18:06.445 "traddr": "10.0.0.2", 00:18:06.445 "trsvcid": "4420" 00:18:06.445 }, 00:18:06.445 "peer_address": { 00:18:06.445 "trtype": "TCP", 00:18:06.445 "adrfam": "IPv4", 00:18:06.445 "traddr": "10.0.0.1", 00:18:06.445 "trsvcid": "34546" 00:18:06.445 }, 00:18:06.445 "auth": { 00:18:06.445 "state": "completed", 00:18:06.445 "digest": "sha512", 00:18:06.445 "dhgroup": "ffdhe4096" 00:18:06.445 } 00:18:06.445 } 00:18:06.445 ]' 00:18:06.445 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.705 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.705 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.705 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.706 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.706 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.706 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.706 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.967 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:06.967 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.540 13:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.540 13:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.540 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.801 00:18:07.801 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.801 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.801 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.062 { 00:18:08.062 "cntlid": 125, 00:18:08.062 "qid": 0, 00:18:08.062 "state": "enabled", 00:18:08.062 "thread": "nvmf_tgt_poll_group_000", 00:18:08.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.062 "listen_address": { 00:18:08.062 "trtype": "TCP", 00:18:08.062 "adrfam": "IPv4", 00:18:08.062 "traddr": "10.0.0.2", 00:18:08.062 
"trsvcid": "4420" 00:18:08.062 }, 00:18:08.062 "peer_address": { 00:18:08.062 "trtype": "TCP", 00:18:08.062 "adrfam": "IPv4", 00:18:08.062 "traddr": "10.0.0.1", 00:18:08.062 "trsvcid": "34580" 00:18:08.062 }, 00:18:08.062 "auth": { 00:18:08.062 "state": "completed", 00:18:08.062 "digest": "sha512", 00:18:08.062 "dhgroup": "ffdhe4096" 00:18:08.062 } 00:18:08.062 } 00:18:08.062 ]' 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.062 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:08.323 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:08.894 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.894 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.894 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.894 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.154 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.155 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.415 00:18:09.415 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.415 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.415 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.677 { 00:18:09.677 "cntlid": 127, 00:18:09.677 "qid": 0, 00:18:09.677 "state": "enabled", 00:18:09.677 "thread": "nvmf_tgt_poll_group_000", 00:18:09.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.677 "listen_address": { 00:18:09.677 "trtype": "TCP", 00:18:09.677 "adrfam": "IPv4", 00:18:09.677 "traddr": "10.0.0.2", 00:18:09.677 "trsvcid": "4420" 00:18:09.677 }, 00:18:09.677 "peer_address": { 00:18:09.677 "trtype": "TCP", 00:18:09.677 "adrfam": "IPv4", 00:18:09.677 "traddr": "10.0.0.1", 00:18:09.677 "trsvcid": "34600" 00:18:09.677 }, 00:18:09.677 "auth": { 00:18:09.677 "state": "completed", 00:18:09.677 "digest": "sha512", 00:18:09.677 "dhgroup": "ffdhe4096" 00:18:09.677 } 00:18:09.677 } 00:18:09.677 ]' 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.677 13:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.677 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.938 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:09.938 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.507 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.767 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.027 00:18:11.027 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.027 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.027 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.287 13:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.287 { 00:18:11.287 "cntlid": 129, 00:18:11.287 "qid": 0, 00:18:11.287 "state": "enabled", 00:18:11.287 "thread": "nvmf_tgt_poll_group_000", 00:18:11.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.287 "listen_address": { 00:18:11.287 "trtype": "TCP", 00:18:11.287 "adrfam": "IPv4", 00:18:11.287 "traddr": "10.0.0.2", 00:18:11.287 "trsvcid": "4420" 00:18:11.287 }, 00:18:11.287 "peer_address": { 00:18:11.287 "trtype": "TCP", 00:18:11.287 "adrfam": "IPv4", 00:18:11.287 "traddr": "10.0.0.1", 00:18:11.287 "trsvcid": "35522" 00:18:11.287 }, 00:18:11.287 "auth": { 00:18:11.287 "state": "completed", 00:18:11.287 "digest": "sha512", 00:18:11.287 "dhgroup": "ffdhe6144" 00:18:11.287 } 00:18:11.287 } 00:18:11.287 ]' 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.287 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.547 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.547 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.547 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.547 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:11.547 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.117 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.117 13:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.376 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.636 00:18:12.636 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.636 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.636 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.896 { 00:18:12.896 "cntlid": 131, 00:18:12.896 "qid": 0, 00:18:12.896 "state": "enabled", 00:18:12.896 "thread": "nvmf_tgt_poll_group_000", 00:18:12.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.896 "listen_address": { 00:18:12.896 "trtype": "TCP", 00:18:12.896 "adrfam": "IPv4", 00:18:12.896 "traddr": "10.0.0.2", 00:18:12.896 
"trsvcid": "4420" 00:18:12.896 }, 00:18:12.896 "peer_address": { 00:18:12.896 "trtype": "TCP", 00:18:12.896 "adrfam": "IPv4", 00:18:12.896 "traddr": "10.0.0.1", 00:18:12.896 "trsvcid": "35558" 00:18:12.896 }, 00:18:12.896 "auth": { 00:18:12.896 "state": "completed", 00:18:12.896 "digest": "sha512", 00:18:12.896 "dhgroup": "ffdhe6144" 00:18:12.896 } 00:18:12.896 } 00:18:12.896 ]' 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.896 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.157 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.157 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.157 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.157 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:13.157 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.726 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.987 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.247 00:18:14.506 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.507 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:14.507 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.507 { 00:18:14.507 "cntlid": 133, 00:18:14.507 "qid": 0, 00:18:14.507 "state": "enabled", 00:18:14.507 "thread": "nvmf_tgt_poll_group_000", 00:18:14.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.507 "listen_address": { 00:18:14.507 "trtype": "TCP", 00:18:14.507 "adrfam": "IPv4", 00:18:14.507 "traddr": "10.0.0.2", 00:18:14.507 "trsvcid": "4420" 00:18:14.507 }, 00:18:14.507 "peer_address": { 00:18:14.507 "trtype": "TCP", 00:18:14.507 "adrfam": "IPv4", 00:18:14.507 "traddr": "10.0.0.1", 00:18:14.507 "trsvcid": "35584" 00:18:14.507 }, 00:18:14.507 "auth": { 00:18:14.507 "state": "completed", 00:18:14.507 "digest": "sha512", 00:18:14.507 "dhgroup": "ffdhe6144" 00:18:14.507 } 00:18:14.507 } 00:18:14.507 ]' 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.507 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.507 13:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:14.767 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:15.707 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.707 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.967 00:18:15.967 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.967 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.967 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.227 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.227 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.227 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.227 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.227 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.227 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.227 { 00:18:16.227 "cntlid": 135, 00:18:16.227 "qid": 0, 00:18:16.227 "state": "enabled", 00:18:16.228 "thread": "nvmf_tgt_poll_group_000", 00:18:16.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.228 "listen_address": { 00:18:16.228 "trtype": "TCP", 00:18:16.228 "adrfam": "IPv4", 00:18:16.228 "traddr": "10.0.0.2", 00:18:16.228 "trsvcid": "4420" 00:18:16.228 }, 00:18:16.228 "peer_address": { 00:18:16.228 "trtype": "TCP", 00:18:16.228 "adrfam": "IPv4", 00:18:16.228 "traddr": "10.0.0.1", 00:18:16.228 "trsvcid": "35604" 00:18:16.228 }, 00:18:16.228 "auth": { 00:18:16.228 "state": "completed", 00:18:16.228 "digest": "sha512", 00:18:16.228 "dhgroup": "ffdhe6144" 00:18:16.228 } 00:18:16.228 } 00:18:16.228 ]' 00:18:16.228 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.228 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.228 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.228 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.228 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.489 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.489 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.489 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.489 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:16.489 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:17.059 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.060 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.060 13:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.320 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.892 00:18:17.892 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.892 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.892 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.153 { 00:18:18.153 "cntlid": 137, 00:18:18.153 "qid": 0, 00:18:18.153 "state": "enabled", 00:18:18.153 "thread": "nvmf_tgt_poll_group_000", 00:18:18.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.153 "listen_address": { 00:18:18.153 "trtype": "TCP", 00:18:18.153 "adrfam": "IPv4", 00:18:18.153 "traddr": "10.0.0.2", 00:18:18.153 
"trsvcid": "4420" 00:18:18.153 }, 00:18:18.153 "peer_address": { 00:18:18.153 "trtype": "TCP", 00:18:18.153 "adrfam": "IPv4", 00:18:18.153 "traddr": "10.0.0.1", 00:18:18.153 "trsvcid": "35638" 00:18:18.153 }, 00:18:18.153 "auth": { 00:18:18.153 "state": "completed", 00:18:18.153 "digest": "sha512", 00:18:18.153 "dhgroup": "ffdhe8192" 00:18:18.153 } 00:18:18.153 } 00:18:18.153 ]' 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.153 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.413 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:18.413 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.982 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.242 13:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.242 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.502 00:18:19.502 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.502 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.502 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.762 { 00:18:19.762 "cntlid": 139, 00:18:19.762 "qid": 0, 00:18:19.762 "state": "enabled", 00:18:19.762 "thread": "nvmf_tgt_poll_group_000", 00:18:19.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.762 "listen_address": { 00:18:19.762 "trtype": "TCP", 00:18:19.762 "adrfam": "IPv4", 00:18:19.762 "traddr": "10.0.0.2", 00:18:19.762 "trsvcid": "4420" 00:18:19.762 }, 00:18:19.762 "peer_address": { 00:18:19.762 "trtype": "TCP", 00:18:19.762 "adrfam": "IPv4", 00:18:19.762 "traddr": "10.0.0.1", 00:18:19.762 "trsvcid": "35660" 00:18:19.762 }, 00:18:19.762 "auth": { 00:18:19.762 "state": "completed", 00:18:19.762 "digest": "sha512", 00:18:19.762 "dhgroup": "ffdhe8192" 00:18:19.762 } 00:18:19.762 } 00:18:19.762 ]' 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.762 13:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.762 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:20.022 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: --dhchap-ctrl-secret DHHC-1:02:ZDExMzM3ZWE4YzRiZjhjOTBlM2MzZGU2ODY4MjY1NmEwZTBkODFiZWZkODI0ODI5IcwMog==: 00:18:20.592 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.851 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.422 00:18:21.422 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.422 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.422 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.682 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.682 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.682 13:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.682 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.682 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.682 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.682 { 00:18:21.682 "cntlid": 141, 00:18:21.682 "qid": 0, 00:18:21.682 "state": "enabled", 00:18:21.682 "thread": "nvmf_tgt_poll_group_000", 00:18:21.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.682 "listen_address": { 00:18:21.682 "trtype": "TCP", 00:18:21.682 "adrfam": "IPv4", 00:18:21.682 "traddr": "10.0.0.2", 00:18:21.682 "trsvcid": "4420" 00:18:21.682 }, 00:18:21.682 "peer_address": { 00:18:21.682 "trtype": "TCP", 00:18:21.682 "adrfam": "IPv4", 00:18:21.682 "traddr": "10.0.0.1", 00:18:21.682 "trsvcid": "37204" 00:18:21.682 }, 00:18:21.682 "auth": { 00:18:21.682 "state": "completed", 00:18:21.682 "digest": "sha512", 00:18:21.682 "dhgroup": "ffdhe8192" 00:18:21.683 } 00:18:21.683 } 00:18:21.683 ]' 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.683 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.944 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:21.944 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:01:YTgxZTVmOWQ0NjljNmVkZWYyYTUyM2JiZjAwNDFhZTimiYEK: 00:18:22.515 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.515 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.515 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.515 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.515 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.515 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.516 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.516 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.776 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.037 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.297 { 00:18:23.297 "cntlid": 143, 00:18:23.297 "qid": 0, 00:18:23.297 "state": "enabled", 00:18:23.297 "thread": "nvmf_tgt_poll_group_000", 00:18:23.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.297 "listen_address": { 00:18:23.297 "trtype": "TCP", 00:18:23.297 "adrfam": 
"IPv4", 00:18:23.297 "traddr": "10.0.0.2", 00:18:23.297 "trsvcid": "4420" 00:18:23.297 }, 00:18:23.297 "peer_address": { 00:18:23.297 "trtype": "TCP", 00:18:23.297 "adrfam": "IPv4", 00:18:23.297 "traddr": "10.0.0.1", 00:18:23.297 "trsvcid": "37238" 00:18:23.297 }, 00:18:23.297 "auth": { 00:18:23.297 "state": "completed", 00:18:23.297 "digest": "sha512", 00:18:23.297 "dhgroup": "ffdhe8192" 00:18:23.297 } 00:18:23.297 } 00:18:23.297 ]' 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.297 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.558 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.558 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.558 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.558 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.558 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.558 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:23.818 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.388 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.388 13:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.388 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.959 00:18:24.959 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.959 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.959 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.219 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.219 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.219 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.219 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.219 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.219 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.219 { 00:18:25.219 "cntlid": 145, 00:18:25.219 "qid": 0, 00:18:25.219 "state": "enabled", 00:18:25.219 "thread": "nvmf_tgt_poll_group_000", 00:18:25.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.219 "listen_address": { 00:18:25.219 "trtype": "TCP", 00:18:25.219 "adrfam": "IPv4", 00:18:25.219 "traddr": "10.0.0.2", 00:18:25.219 "trsvcid": "4420" 00:18:25.219 }, 00:18:25.219 "peer_address": { 00:18:25.219 "trtype": "TCP", 00:18:25.219 "adrfam": "IPv4", 00:18:25.219 "traddr": "10.0.0.1", 00:18:25.219 "trsvcid": "37266" 00:18:25.219 }, 00:18:25.220 "auth": { 00:18:25.220 "state": 
"completed", 00:18:25.220 "digest": "sha512", 00:18:25.220 "dhgroup": "ffdhe8192" 00:18:25.220 } 00:18:25.220 } 00:18:25.220 ]' 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.220 13:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.480 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:25.480 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NWE5OWRhMjA4Y2QwYzk3ODczN2VmNGMwYjI4OWZkMjA1NDc2MjJhNmNhY2Y0ZjZjwZZloA==: --dhchap-ctrl-secret 
DHHC-1:03:MWUxOWEzNTY1MjQ4MmJlOWVkOWVkOTA1NTgxMzAyYzllNzdiZmFiN2Y2NTU2OGI0NDYwYmU5ZTQzOWYyOWVmN+uGpDA=: 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:26.051 13:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:26.623 request: 00:18:26.623 { 00:18:26.623 "name": "nvme0", 00:18:26.623 "trtype": "tcp", 00:18:26.623 "traddr": "10.0.0.2", 00:18:26.623 "adrfam": "ipv4", 00:18:26.623 "trsvcid": "4420", 00:18:26.623 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.623 "prchk_reftag": false, 00:18:26.623 "prchk_guard": false, 00:18:26.623 "hdgst": false, 00:18:26.623 "ddgst": false, 00:18:26.623 "dhchap_key": "key2", 00:18:26.623 "allow_unrecognized_csi": false, 00:18:26.623 "method": "bdev_nvme_attach_controller", 00:18:26.623 "req_id": 1 00:18:26.623 } 00:18:26.623 Got JSON-RPC error response 00:18:26.623 response: 00:18:26.623 { 00:18:26.623 "code": -5, 00:18:26.623 "message": 
"Input/output error" 00:18:26.623 } 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.623 13:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.623 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.624 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.624 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.624 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.624 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:26.885 request: 00:18:26.885 { 00:18:26.885 "name": "nvme0", 00:18:26.885 "trtype": "tcp", 00:18:26.885 "traddr": "10.0.0.2", 00:18:26.885 "adrfam": "ipv4", 00:18:26.885 "trsvcid": "4420", 00:18:26.885 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.885 "prchk_reftag": false, 00:18:26.885 "prchk_guard": false, 00:18:26.885 "hdgst": 
false, 00:18:26.885 "ddgst": false, 00:18:26.885 "dhchap_key": "key1", 00:18:26.885 "dhchap_ctrlr_key": "ckey2", 00:18:26.885 "allow_unrecognized_csi": false, 00:18:26.885 "method": "bdev_nvme_attach_controller", 00:18:26.885 "req_id": 1 00:18:26.885 } 00:18:26.885 Got JSON-RPC error response 00:18:26.885 response: 00:18:26.885 { 00:18:26.885 "code": -5, 00:18:26.885 "message": "Input/output error" 00:18:26.885 } 00:18:27.145 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:27.145 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.145 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.145 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.145 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.146 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.405 request: 00:18:27.405 { 00:18:27.405 "name": "nvme0", 00:18:27.405 "trtype": 
"tcp", 00:18:27.405 "traddr": "10.0.0.2", 00:18:27.405 "adrfam": "ipv4", 00:18:27.405 "trsvcid": "4420", 00:18:27.405 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.405 "prchk_reftag": false, 00:18:27.405 "prchk_guard": false, 00:18:27.405 "hdgst": false, 00:18:27.405 "ddgst": false, 00:18:27.405 "dhchap_key": "key1", 00:18:27.405 "dhchap_ctrlr_key": "ckey1", 00:18:27.405 "allow_unrecognized_csi": false, 00:18:27.405 "method": "bdev_nvme_attach_controller", 00:18:27.405 "req_id": 1 00:18:27.405 } 00:18:27.405 Got JSON-RPC error response 00:18:27.405 response: 00:18:27.405 { 00:18:27.405 "code": -5, 00:18:27.405 "message": "Input/output error" 00:18:27.405 } 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.405 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2122187 00:18:27.406 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2122187 ']' 00:18:27.406 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2122187 00:18:27.406 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:27.406 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.406 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2122187 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2122187' 00:18:27.666 killing process with pid 2122187 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2122187 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2122187 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2147252 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2147252 00:18:27.666 13:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2147252 ']' 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.666 13:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.620 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2147252 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2147252 ']' 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.621 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 null0 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vaQ 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.4ss ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ss 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UPK 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.edy ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.edy 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zlI 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.m9i ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.m9i 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8H0 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.881 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.822 nvme0n1 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.822 { 00:18:29.822 "cntlid": 1, 00:18:29.822 "qid": 0, 00:18:29.822 "state": "enabled", 00:18:29.822 "thread": "nvmf_tgt_poll_group_000", 00:18:29.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.822 "listen_address": { 00:18:29.822 "trtype": "TCP", 00:18:29.822 "adrfam": "IPv4", 00:18:29.822 "traddr": "10.0.0.2", 00:18:29.822 "trsvcid": "4420" 00:18:29.822 }, 00:18:29.822 "peer_address": { 00:18:29.822 "trtype": "TCP", 00:18:29.822 "adrfam": "IPv4", 00:18:29.822 "traddr": 
"10.0.0.1", 00:18:29.822 "trsvcid": "37330" 00:18:29.822 }, 00:18:29.822 "auth": { 00:18:29.822 "state": "completed", 00:18:29.822 "digest": "sha512", 00:18:29.822 "dhgroup": "ffdhe8192" 00:18:29.822 } 00:18:29.822 } 00:18:29.822 ]' 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.822 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.082 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.082 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.082 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.082 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.082 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.342 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:30.342 13:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:30.913 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.913 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.913 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.173 request: 00:18:31.173 { 00:18:31.173 "name": "nvme0", 00:18:31.173 "trtype": "tcp", 00:18:31.173 "traddr": "10.0.0.2", 00:18:31.173 "adrfam": "ipv4", 00:18:31.173 "trsvcid": "4420", 00:18:31.173 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.173 "prchk_reftag": false, 00:18:31.173 "prchk_guard": false, 00:18:31.173 "hdgst": false, 00:18:31.173 "ddgst": false, 00:18:31.174 "dhchap_key": "key3", 00:18:31.174 
"allow_unrecognized_csi": false, 00:18:31.174 "method": "bdev_nvme_attach_controller", 00:18:31.174 "req_id": 1 00:18:31.174 } 00:18:31.174 Got JSON-RPC error response 00:18:31.174 response: 00:18:31.174 { 00:18:31.174 "code": -5, 00:18:31.174 "message": "Input/output error" 00:18:31.174 } 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:31.174 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:31.434 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:31.434 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:31.434 request:
00:18:31.434 {
00:18:31.434 "name": "nvme0",
00:18:31.434 "trtype": "tcp",
00:18:31.434 "traddr": "10.0.0.2",
00:18:31.434 "adrfam": "ipv4",
00:18:31.434 "trsvcid": "4420",
00:18:31.434 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:31.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:31.434 "prchk_reftag": false,
00:18:31.434 "prchk_guard": false,
00:18:31.434 "hdgst": false,
00:18:31.434 "ddgst": false,
00:18:31.434 "dhchap_key": "key3",
00:18:31.434 "allow_unrecognized_csi": false,
00:18:31.434 "method": "bdev_nvme_attach_controller",
00:18:31.434 "req_id": 1
00:18:31.434 }
00:18:31.434 Got JSON-RPC error response
00:18:31.434 response:
00:18:31.434 {
00:18:31.434 "code": -5,
00:18:31.434 "message": "Input/output error"
00:18:31.434 }
00:18:31.434
13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.434 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.695 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.696 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:31.696 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:31.956 request:
00:18:31.956 {
00:18:31.956 "name": "nvme0",
00:18:31.956 "trtype": "tcp",
00:18:31.956 "traddr": "10.0.0.2",
00:18:31.956 "adrfam": "ipv4",
00:18:31.956 "trsvcid": "4420",
00:18:31.956 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:31.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:31.956 "prchk_reftag": false,
00:18:31.956 "prchk_guard": false,
00:18:31.956 "hdgst": false,
00:18:31.956 "ddgst": false,
00:18:31.956 "dhchap_key": "key0",
00:18:31.956 "dhchap_ctrlr_key": "key1",
00:18:31.956 "allow_unrecognized_csi": false,
00:18:31.956 "method": "bdev_nvme_attach_controller",
00:18:31.956 "req_id": 1
00:18:31.956 }
00:18:31.956 Got JSON-RPC error response
00:18:31.956 response:
00:18:31.956 {
00:18:31.956 "code": -5,
00:18:31.956 "message": "Input/output error"
00:18:31.956 }
00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:31.956 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:32.216 nvme0n1 00:18:32.216 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:32.216 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:32.216 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.477 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.477 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.477 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:32.737 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:33.307 nvme0n1 00:18:33.567 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:33.567 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:33.567 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.567 
13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.567 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:33.827 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.827 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:33.828 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: --dhchap-ctrl-secret DHHC-1:03:NjA1OGY4YjJmMjdjOWNjZmE4N2I5NWQwOGZiYjBmYmFlYTcxMjFlNGM3OGYyOGExZWI4MzVhNDMzNjA2ZjdkMNjtvUc=: 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.398 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:34.658 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:34.919 request:
00:18:34.919 {
00:18:34.919 "name": "nvme0",
00:18:34.919 "trtype": "tcp",
00:18:34.919 "traddr": "10.0.0.2",
00:18:34.919 "adrfam": "ipv4",
00:18:34.919 "trsvcid": "4420",
00:18:34.919 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:34.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:34.919 "prchk_reftag": false,
00:18:34.919 "prchk_guard": false,
00:18:34.919 "hdgst": false,
00:18:34.919 "ddgst": false,
00:18:34.919 "dhchap_key": "key1",
00:18:34.919 "allow_unrecognized_csi": false,
00:18:34.919 "method": "bdev_nvme_attach_controller",
00:18:34.919 "req_id": 1
00:18:34.919 }
00:18:34.919 Got JSON-RPC error response
00:18:34.919 response:
00:18:34.919 {
00:18:34.919 "code": -5,
00:18:34.919 "message": "Input/output error"
00:18:34.919 }
00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.919 13:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.861 nvme0n1 00:18:35.861 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:35.861 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:35.861 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.861 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.861 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.861 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:36.135 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:36.397 nvme0n1 00:18:36.397 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:36.397 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:36.397 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: '' 2s 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: ]] 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZmRjNGRmYWEyNmY2MGY5YjYxM2FjMDUxMjA3NGExZjJ0RGGL: 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:36.710 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:39.315 
13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: 2s 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:39.315 13:26:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: ]] 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGVjZjkzOTg2OGZjOWUxZTc4ZWI4MmFhNzE2MjlkOTE3NDk2YTc2NTVhNDlhNTFhD8R5kQ==: 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:39.315 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.224 13:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.794 nvme0n1 00:18:41.794 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.794 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.794 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.794 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.794 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.794 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.055 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:42.055 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:42.055 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:42.316 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.598 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.170 request: 00:18:43.170 { 00:18:43.170 "name": "nvme0", 00:18:43.170 "dhchap_key": "key1", 00:18:43.170 "dhchap_ctrlr_key": "key3", 00:18:43.170 "method": "bdev_nvme_set_keys", 00:18:43.170 "req_id": 1 00:18:43.170 } 00:18:43.170 Got JSON-RPC error response 00:18:43.170 response: 00:18:43.170 { 00:18:43.170 "code": -13, 00:18:43.170 "message": "Permission denied" 00:18:43.170 } 00:18:43.170 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:43.170 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.170 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.170 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.170 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:43.170 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:43.170 13:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.430 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:43.430 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:44.370 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:44.370 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:44.370 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.370 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:44.370 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.370 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.370 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.629 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.629 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:44.629 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:44.629 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:45.197 nvme0n1 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.198 13:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.198 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:45.767 request: 00:18:45.767 { 00:18:45.767 "name": "nvme0", 00:18:45.767 "dhchap_key": "key2", 00:18:45.767 "dhchap_ctrlr_key": "key0", 00:18:45.767 "method": "bdev_nvme_set_keys", 00:18:45.767 "req_id": 1 00:18:45.767 } 00:18:45.767 Got JSON-RPC error response 00:18:45.767 response: 00:18:45.767 { 00:18:45.767 "code": -13, 00:18:45.767 "message": "Permission denied" 00:18:45.767 } 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.767 13:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:45.767 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:47.149 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:47.149 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:47.149 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.149 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:47.149 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:47.149 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2122431 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2122431 ']' 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2122431 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2122431 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2122431' 00:18:47.150 killing process with pid 2122431 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2122431 00:18:47.150 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2122431 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.411 rmmod nvme_tcp 00:18:47.411 rmmod nvme_fabrics 00:18:47.411 rmmod nvme_keyring 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2147252 ']' 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2147252 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2147252 ']' 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2147252 
00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.411 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147252 00:18:47.411 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.411 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.411 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147252' 00:18:47.411 killing process with pid 2147252 00:18:47.411 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2147252 00:18:47.411 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2147252 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:47.672 13:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:47.672 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vaQ /tmp/spdk.key-sha256.UPK /tmp/spdk.key-sha384.zlI /tmp/spdk.key-sha512.8H0 /tmp/spdk.key-sha512.4ss /tmp/spdk.key-sha384.edy /tmp/spdk.key-sha256.m9i '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:49.592 00:18:49.592 real 2m32.664s 00:18:49.592 user 5m44.208s 00:18:49.592 sys 0m22.019s 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.592 ************************************ 00:18:49.592 END TEST nvmf_auth_target 00:18:49.592 ************************************ 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:49.592 13:26:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.854 ************************************ 00:18:49.854 START TEST nvmf_bdevio_no_huge 00:18:49.854 ************************************ 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:49.854 * Looking for test storage... 00:18:49.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:49.854 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:49.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.855 --rc genhtml_branch_coverage=1 00:18:49.855 --rc genhtml_function_coverage=1 00:18:49.855 --rc genhtml_legend=1 00:18:49.855 --rc geninfo_all_blocks=1 00:18:49.855 --rc geninfo_unexecuted_blocks=1 00:18:49.855 00:18:49.855 ' 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:49.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.855 --rc genhtml_branch_coverage=1 00:18:49.855 --rc genhtml_function_coverage=1 00:18:49.855 --rc genhtml_legend=1 00:18:49.855 --rc geninfo_all_blocks=1 00:18:49.855 --rc geninfo_unexecuted_blocks=1 00:18:49.855 00:18:49.855 ' 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:49.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.855 --rc genhtml_branch_coverage=1 00:18:49.855 --rc genhtml_function_coverage=1 00:18:49.855 --rc genhtml_legend=1 00:18:49.855 --rc geninfo_all_blocks=1 00:18:49.855 --rc geninfo_unexecuted_blocks=1 00:18:49.855 00:18:49.855 ' 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:49.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.855 --rc genhtml_branch_coverage=1 
00:18:49.855 --rc genhtml_function_coverage=1 00:18:49.855 --rc genhtml_legend=1 00:18:49.855 --rc geninfo_all_blocks=1 00:18:49.855 --rc geninfo_unexecuted_blocks=1 00:18:49.855 00:18:49.855 ' 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.855 13:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.855 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:50.117 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:18:58.259 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:58.259 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:58.259 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.259 
13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:58.259 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
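The discovery pass above ends with two `cvl` net devices and `nvmf_tcp_init` then assigns roles: with more than one interface (`(( 2 > 1 ))`), the first becomes the target side (`cvl_0_0`, 10.0.0.2, later moved into a namespace) and the second the initiator side (`cvl_0_1`, 10.0.0.1). A minimal Python sketch of that selection logic — the function name and dict layout are illustrative, the real logic is shell in `nvmf/common.sh`:

```python
# Illustrative sketch of the interface-role assignment performed by
# nvmf_tcp_init in nvmf/common.sh; not the actual shell implementation.

def assign_tcp_roles(net_devs):
    """Pick target/initiator roles from the discovered net devices.

    Mirrors the trace above: with at least two devices, the first is
    the target interface (10.0.0.2, placed into a netns) and the
    second is the initiator interface (10.0.0.1).
    """
    if len(net_devs) < 2:
        raise ValueError("phy TCP tests need at least two net devices")
    return {
        "NVMF_TARGET_INTERFACE": net_devs[0],
        "NVMF_INITIATOR_INTERFACE": net_devs[1],
        "NVMF_FIRST_TARGET_IP": "10.0.0.2",
        "NVMF_INITIATOR_IP": "10.0.0.1",
        "NVMF_TARGET_NAMESPACE": f"{net_devs[0]}_ns_spdk",
    }

roles = assign_tcp_roles(["cvl_0_0", "cvl_0_1"])
print(roles["NVMF_TARGET_INTERFACE"], roles["NVMF_TARGET_NAMESPACE"])
```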
00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.259 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:58.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:18:58.260 00:18:58.260 --- 10.0.0.2 ping statistics --- 00:18:58.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.260 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:18:58.260 00:18:58.260 --- 10.0.0.1 ping statistics --- 00:18:58.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.260 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:58.260 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2155409 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2155409 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2155409 ']' 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.260 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.260 [2024-12-06 13:26:44.084321] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:18:58.260 [2024-12-06 13:26:44.084388] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:58.260 [2024-12-06 13:26:44.193096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.260 [2024-12-06 13:26:44.253442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.260 [2024-12-06 13:26:44.253498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.260 [2024-12-06 13:26:44.253508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.260 [2024-12-06 13:26:44.253515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.260 [2024-12-06 13:26:44.253521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
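The target above is launched with `-m 0x78`, and the reactor lines that follow confirm cores 3 through 6 coming up. A small sketch of how an SPDK/DPDK core mask decodes to core indices (the helper name is illustrative):

```python
# Decode a CPU core mask of the kind passed to nvmf_tgt (-m 0x78)
# into the core indices it selects; one reactor runs per set bit.

def cores_from_mask(mask: int) -> list[int]:
    """Return the bit positions set in a core mask, lowest first."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores_from_mask(0x78))  # nvmf_tgt mask -> cores 3..6
print(cores_from_mask(0x7))   # the bdevio app's mask (-c 0x7) -> cores 0..2
```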
00:18:58.260 [2024-12-06 13:26:44.255010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:58.260 [2024-12-06 13:26:44.255265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:58.260 [2024-12-06 13:26:44.255527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:58.260 [2024-12-06 13:26:44.255648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.521 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.521 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:58.521 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.521 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.521 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 [2024-12-06 13:26:44.968832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:58.522 13:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 Malloc0 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 [2024-12-06 13:26:45.022961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.522 13:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.522 { 00:18:58.522 "params": { 00:18:58.522 "name": "Nvme$subsystem", 00:18:58.522 "trtype": "$TEST_TRANSPORT", 00:18:58.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.522 "adrfam": "ipv4", 00:18:58.522 "trsvcid": "$NVMF_PORT", 00:18:58.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.522 "hdgst": ${hdgst:-false}, 00:18:58.522 "ddgst": ${ddgst:-false} 00:18:58.522 }, 00:18:58.522 "method": "bdev_nvme_attach_controller" 00:18:58.522 } 00:18:58.522 EOF 00:18:58.522 )") 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
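`gen_nvmf_target_json` expands the heredoc template above into the `bdev_nvme_attach_controller` JSON that bdevio reads via `--json /dev/fd/62`. The sketch below performs the same placeholder substitution in Python and parses the result; the template text and values are taken from this run, while the `Template` helper is just an illustrative stand-in for the shell expansion:

```python
import json
from string import Template

# Mirrors the heredoc in gen_nvmf_target_json (nvmf/common.sh);
# placeholder values match this test run.
TEMPLATE = Template("""{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}""")

config = json.loads(TEMPLATE.substitute(
    subsystem="1",
    TEST_TRANSPORT="tcp",
    NVMF_FIRST_TARGET_IP="10.0.0.2",
    NVMF_PORT="4420",
))
print(config["params"]["subnqn"])  # nqn.2016-06.io.spdk:cnode1
```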
00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:58.522 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:58.522 "params": { 00:18:58.522 "name": "Nvme1", 00:18:58.522 "trtype": "tcp", 00:18:58.522 "traddr": "10.0.0.2", 00:18:58.522 "adrfam": "ipv4", 00:18:58.522 "trsvcid": "4420", 00:18:58.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.522 "hdgst": false, 00:18:58.522 "ddgst": false 00:18:58.522 }, 00:18:58.522 "method": "bdev_nvme_attach_controller" 00:18:58.522 }' 00:18:58.522 [2024-12-06 13:26:45.089612] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:18:58.522 [2024-12-06 13:26:45.089680] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2155606 ] 00:18:58.782 [2024-12-06 13:26:45.185569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:58.782 [2024-12-06 13:26:45.246876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.782 [2024-12-06 13:26:45.247037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.782 [2024-12-06 13:26:45.247038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.042 I/O targets: 00:18:59.042 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:59.042 00:18:59.042 00:18:59.042 CUnit - A unit testing framework for C - Version 2.1-3 00:18:59.042 http://cunit.sourceforge.net/ 00:18:59.042 00:18:59.042 00:18:59.042 Suite: bdevio tests on: Nvme1n1 00:18:59.042 Test: blockdev write read block ...passed 00:18:59.042 Test: blockdev write zeroes read block ...passed 00:18:59.042 Test: blockdev write zeroes read no split ...passed 00:18:59.042 Test: blockdev write zeroes 
read split ...passed 00:18:59.042 Test: blockdev write zeroes read split partial ...passed 00:18:59.043 Test: blockdev reset ...[2024-12-06 13:26:45.693362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:59.043 [2024-12-06 13:26:45.693466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99b430 (9): Bad file descriptor 00:18:59.302 [2024-12-06 13:26:45.752710] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:59.302 passed 00:18:59.302 Test: blockdev write read 8 blocks ...passed 00:18:59.302 Test: blockdev write read size > 128k ...passed 00:18:59.302 Test: blockdev write read invalid size ...passed 00:18:59.302 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.303 Test: blockdev write read max offset ...passed 00:18:59.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.303 Test: blockdev writev readv 8 blocks ...passed 00:18:59.303 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.563 Test: blockdev writev readv block ...passed 00:18:59.563 Test: blockdev writev readv size > 128k ...passed 00:18:59.563 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.563 Test: blockdev comparev and writev ...[2024-12-06 13:26:45.977633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.977683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.977700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 
13:26:45.977709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.978252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.978270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.978285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.978297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.978841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.978858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.978872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.978880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.979408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.979423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:45.979437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.563 [2024-12-06 13:26:45.979446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:59.563 passed 00:18:59.563 Test: blockdev nvme passthru rw ...passed 00:18:59.563 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:26:46.065313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.563 [2024-12-06 13:26:46.065331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:46.065714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.563 [2024-12-06 13:26:46.065729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:46.066103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.563 [2024-12-06 13:26:46.066116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:59.563 [2024-12-06 13:26:46.066495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.563 [2024-12-06 13:26:46.066510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:59.563 passed 00:18:59.563 Test: blockdev nvme admin passthru ...passed 00:18:59.563 Test: blockdev copy ...passed 00:18:59.563 00:18:59.563 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.563 suites 1 1 n/a 0 0 00:18:59.563 tests 23 23 23 0 0 00:18:59.563 asserts 152 152 152 0 n/a 00:18:59.563 00:18:59.563 Elapsed time = 1.157 seconds 
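The CUnit "Run Summary" table that closes the bdevio run uses fixed columns (Type, Total, Ran, Passed, Failed, Inactive). A small illustrative parser over the rows printed above — the row values are verbatim from this log, the helper itself is not part of SPDK:

```python
# Parse the CUnit Run Summary rows emitted by bdevio; values below
# are copied from this run's output.

SUMMARY = """\
suites 1 1 n/a 0 0
tests 23 23 23 0 0
asserts 152 152 152 0 n/a
"""

def parse_summary(text):
    """Map each row type to its [Total, Ran, Passed, Failed, Inactive] fields."""
    rows = {}
    for line in text.splitlines():
        kind, *fields = line.split()
        rows[kind] = fields
    return rows

rows = parse_summary(SUMMARY)
print("tests failed:", rows["tests"][3])  # tests failed: 0
```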
00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:59.823 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:59.824 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:59.824 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.824 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:59.824 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.824 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.824 rmmod nvme_tcp 00:18:59.824 rmmod nvme_fabrics 00:19:00.084 rmmod nvme_keyring 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2155409 ']' 00:19:00.084 13:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2155409 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2155409 ']' 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2155409 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155409 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155409' 00:19:00.084 killing process with pid 2155409 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2155409 00:19:00.084 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2155409 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:00.345 13:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.345 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.899 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.899 00:19:02.899 real 0m12.693s 00:19:02.899 user 0m14.888s 00:19:02.899 sys 0m6.773s 00:19:02.899 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.899 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.899 ************************************ 00:19:02.899 END TEST nvmf_bdevio_no_huge 00:19:02.899 ************************************ 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.899 
************************************ 00:19:02.899 START TEST nvmf_tls 00:19:02.899 ************************************ 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.899 * Looking for test storage... 00:19:02.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:02.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.899 --rc genhtml_branch_coverage=1 00:19:02.899 --rc genhtml_function_coverage=1 00:19:02.899 --rc genhtml_legend=1 00:19:02.899 --rc geninfo_all_blocks=1 00:19:02.899 --rc geninfo_unexecuted_blocks=1 00:19:02.899 00:19:02.899 ' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:02.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.899 --rc genhtml_branch_coverage=1 00:19:02.899 --rc genhtml_function_coverage=1 00:19:02.899 --rc genhtml_legend=1 00:19:02.899 --rc geninfo_all_blocks=1 00:19:02.899 --rc geninfo_unexecuted_blocks=1 00:19:02.899 00:19:02.899 ' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:02.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.899 --rc genhtml_branch_coverage=1 00:19:02.899 --rc genhtml_function_coverage=1 00:19:02.899 --rc genhtml_legend=1 00:19:02.899 --rc geninfo_all_blocks=1 00:19:02.899 --rc geninfo_unexecuted_blocks=1 00:19:02.899 00:19:02.899 ' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:02.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.899 --rc genhtml_branch_coverage=1 00:19:02.899 --rc genhtml_function_coverage=1 00:19:02.899 --rc genhtml_legend=1 00:19:02.899 --rc geninfo_all_blocks=1 00:19:02.899 --rc geninfo_unexecuted_blocks=1 00:19:02.899 00:19:02.899 ' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.899 
13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.899 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:02.900 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.053 13:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:11.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:11.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.053 13:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:11.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:11.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:11.053 13:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.053 
13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:19:11.053 00:19:11.053 --- 10.0.0.2 ping statistics --- 00:19:11.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.053 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:11.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:19:11.053 00:19:11.053 --- 10.0.0.1 ping statistics --- 00:19:11.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.053 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2160115 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2160115 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2160115 ']' 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.053 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 [2024-12-06 13:26:56.880362] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:19:11.053 [2024-12-06 13:26:56.880430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.053 [2024-12-06 13:26:56.984237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.053 [2024-12-06 13:26:57.035216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.053 [2024-12-06 13:26:57.035275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:11.053 [2024-12-06 13:26:57.035284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.053 [2024-12-06 13:26:57.035291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.053 [2024-12-06 13:26:57.035297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.053 [2024-12-06 13:26:57.036048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.053 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.053 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.053 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.053 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.053 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.313 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:11.313 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:11.313 true 00:19:11.313 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:11.313 13:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:11.573 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:11.573 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:11.573 
13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:11.833 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:11.833 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:12.094 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:12.094 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:12.094 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:12.094 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.094 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:12.354 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:12.354 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:12.354 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.354 13:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:12.613 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:12.613 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:12.614 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:12.614 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.614 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:12.872 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:12.872 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:12.872 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:13.132 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:13.132 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:13.392 13:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.TjylYQ9QYk 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.LOjW9muHfq 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.TjylYQ9QYk 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.LOjW9muHfq 00:19:13.392 13:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:13.653 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:13.914 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.TjylYQ9QYk 00:19:13.914 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.TjylYQ9QYk 00:19:13.914 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:14.174 [2024-12-06 13:27:00.571470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.174 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:14.174 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:14.434 [2024-12-06 13:27:00.928332] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.434 [2024-12-06 13:27:00.928561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.434 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.695 malloc0 00:19:14.695 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.695 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.TjylYQ9QYk 00:19:14.953 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.953 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TjylYQ9QYk 00:19:27.177 Initializing NVMe Controllers 00:19:27.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:27.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:27.177 Initialization complete. Launching workers. 
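Editor's note on the `format_interchange_psk` / `format_key` helpers exercised above: the keys they print (e.g. `NVMeTLSkey-1:01:MDAx...JEiQ:`) follow the NVMe TLS PSK interchange layout of a prefix, an HMAC identifier, and a base64 field carrying the key bytes plus a trailing CRC32. The sketch below is an illustration of that layout inferred from the log output, not SPDK's actual implementation; the function name is borrowed from the script above for readability.

```python
import base64
import zlib


def format_interchange_psk(secret: str, hmac_id: int) -> str:
    """Illustrative sketch of the NVMe TLS PSK interchange format:
    'NVMeTLSkey-1:<hmac>:<base64(key bytes + CRC32, little-endian)>:'."""
    key = secret.encode()  # the configured secret, taken as raw bytes
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # integrity tail appended to the key
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, base64.b64encode(key + crc).decode())
```

Decoding the base64 field of a key file such as `/tmp/tmp.TjylYQ9QYk` above and re-checking the trailing CRC32 is a quick way to validate a hand-built key before handing it to `keyring_file_add_key`.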
00:19:27.177 ======================================================== 00:19:27.177 Latency(us) 00:19:27.177 Device Information : IOPS MiB/s Average min max 00:19:27.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18869.67 73.71 3391.87 1128.82 4393.44 00:19:27.177 ======================================================== 00:19:27.177 Total : 18869.67 73.71 3391.87 1128.82 4393.44 00:19:27.177 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjylYQ9QYk 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TjylYQ9QYk 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2163634 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2163634 /var/tmp/bdevperf.sock 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2163634 ']' 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.177 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.177 [2024-12-06 13:27:11.773524] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:19:27.177 [2024-12-06 13:27:11.773579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163634 ] 00:19:27.177 [2024-12-06 13:27:11.860751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.177 [2024-12-06 13:27:11.896075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.177 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.177 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:27.177 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TjylYQ9QYk 00:19:27.177 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:27.177 [2024-12-06 13:27:12.860479] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.177 TLSTESTn1 00:19:27.177 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:27.177 Running I/O for 10 seconds... 00:19:28.829 5516.00 IOPS, 21.55 MiB/s [2024-12-06T12:27:16.058Z] 5285.00 IOPS, 20.64 MiB/s [2024-12-06T12:27:17.443Z] 5639.00 IOPS, 22.03 MiB/s [2024-12-06T12:27:18.384Z] 5679.25 IOPS, 22.18 MiB/s [2024-12-06T12:27:19.321Z] 5760.80 IOPS, 22.50 MiB/s [2024-12-06T12:27:20.262Z] 5782.50 IOPS, 22.59 MiB/s [2024-12-06T12:27:21.202Z] 5897.00 IOPS, 23.04 MiB/s [2024-12-06T12:27:22.144Z] 5964.50 IOPS, 23.30 MiB/s [2024-12-06T12:27:23.085Z] 6006.78 IOPS, 23.46 MiB/s [2024-12-06T12:27:23.085Z] 6033.50 IOPS, 23.57 MiB/s 00:19:36.426 Latency(us) 00:19:36.426 [2024-12-06T12:27:23.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.426 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.426 Verification LBA range: start 0x0 length 0x2000 00:19:36.426 TLSTESTn1 : 10.01 6038.80 23.59 0.00 0.00 21162.58 4997.12 48059.73 00:19:36.426 [2024-12-06T12:27:23.085Z] =================================================================================================================== 00:19:36.426 [2024-12-06T12:27:23.085Z] Total : 6038.80 23.59 0.00 0.00 21162.58 4997.12 48059.73 00:19:36.426 { 00:19:36.426 "results": [ 00:19:36.426 { 00:19:36.426 "job": "TLSTESTn1", 00:19:36.426 "core_mask": "0x4", 00:19:36.426 "workload": "verify", 00:19:36.426 "status": "finished", 00:19:36.426 "verify_range": { 00:19:36.426 "start": 0, 00:19:36.426 "length": 8192 00:19:36.426 }, 00:19:36.426 "queue_depth": 128, 00:19:36.426 "io_size": 4096, 00:19:36.426 "runtime": 10.012082, 00:19:36.426 "iops": 
6038.803917107351, 00:19:36.426 "mibps": 23.589077801200588, 00:19:36.426 "io_failed": 0, 00:19:36.426 "io_timeout": 0, 00:19:36.426 "avg_latency_us": 21162.580854876145, 00:19:36.426 "min_latency_us": 4997.12, 00:19:36.426 "max_latency_us": 48059.73333333333 00:19:36.426 } 00:19:36.426 ], 00:19:36.426 "core_count": 1 00:19:36.426 } 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2163634 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2163634 ']' 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2163634 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163634 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163634' 00:19:36.686 killing process with pid 2163634 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2163634 00:19:36.686 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.686 00:19:36.686 Latency(us) 00:19:36.686 [2024-12-06T12:27:23.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.686 [2024-12-06T12:27:23.345Z] 
=================================================================================================================== 00:19:36.686 [2024-12-06T12:27:23.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2163634 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LOjW9muHfq 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LOjW9muHfq 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LOjW9muHfq 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LOjW9muHfq 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2165758 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2165758 /var/tmp/bdevperf.sock 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2165758 ']' 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.686 13:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.686 [2024-12-06 13:27:23.325329] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:36.686 [2024-12-06 13:27:23.325384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165758 ] 00:19:36.947 [2024-12-06 13:27:23.409710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.947 [2024-12-06 13:27:23.438575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.517 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.517 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:37.517 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LOjW9muHfq 00:19:37.777 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.043 [2024-12-06 13:27:24.454318] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.043 [2024-12-06 13:27:24.458760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:38.043 [2024-12-06 13:27:24.459385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483800 (107): Transport endpoint is not connected 00:19:38.043 [2024-12-06 13:27:24.460380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483800 (9): Bad file descriptor 00:19:38.043 
[2024-12-06 13:27:24.461382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:38.043 [2024-12-06 13:27:24.461389] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:38.043 [2024-12-06 13:27:24.461395] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:38.043 [2024-12-06 13:27:24.461403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:38.043 request: 00:19:38.043 { 00:19:38.043 "name": "TLSTEST", 00:19:38.043 "trtype": "tcp", 00:19:38.043 "traddr": "10.0.0.2", 00:19:38.043 "adrfam": "ipv4", 00:19:38.043 "trsvcid": "4420", 00:19:38.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.043 "prchk_reftag": false, 00:19:38.043 "prchk_guard": false, 00:19:38.043 "hdgst": false, 00:19:38.043 "ddgst": false, 00:19:38.043 "psk": "key0", 00:19:38.043 "allow_unrecognized_csi": false, 00:19:38.043 "method": "bdev_nvme_attach_controller", 00:19:38.043 "req_id": 1 00:19:38.043 } 00:19:38.043 Got JSON-RPC error response 00:19:38.043 response: 00:19:38.043 { 00:19:38.043 "code": -5, 00:19:38.043 "message": "Input/output error" 00:19:38.043 } 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2165758 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2165758 ']' 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2165758 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2165758 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2165758' 00:19:38.043 killing process with pid 2165758 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2165758 00:19:38.043 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.043 00:19:38.043 Latency(us) 00:19:38.043 [2024-12-06T12:27:24.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.043 [2024-12-06T12:27:24.702Z] =================================================================================================================== 00:19:38.043 [2024-12-06T12:27:24.702Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2165758 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TjylYQ9QYk 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TjylYQ9QYk 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TjylYQ9QYk 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TjylYQ9QYk 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2166103 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2166103 /var/tmp/bdevperf.sock 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2166103 ']' 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.043 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.344 [2024-12-06 13:27:24.715690] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:38.344 [2024-12-06 13:27:24.715747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166103 ] 00:19:38.344 [2024-12-06 13:27:24.801075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.344 [2024-12-06 13:27:24.829011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.017 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.017 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.017 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TjylYQ9QYk 00:19:39.277 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:39.277 [2024-12-06 13:27:25.848424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.277 [2024-12-06 13:27:25.852792] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:39.277 [2024-12-06 13:27:25.852813] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:39.277 [2024-12-06 13:27:25.852834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:39.277 [2024-12-06 13:27:25.853472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecf800 (107): Transport endpoint is not connected 00:19:39.277 [2024-12-06 13:27:25.854466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecf800 (9): Bad file descriptor 00:19:39.277 [2024-12-06 13:27:25.855467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:39.277 [2024-12-06 13:27:25.855476] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:39.277 [2024-12-06 13:27:25.855482] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:39.277 [2024-12-06 13:27:25.855490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:39.277 request: 00:19:39.277 { 00:19:39.277 "name": "TLSTEST", 00:19:39.277 "trtype": "tcp", 00:19:39.277 "traddr": "10.0.0.2", 00:19:39.277 "adrfam": "ipv4", 00:19:39.277 "trsvcid": "4420", 00:19:39.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.277 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:39.277 "prchk_reftag": false, 00:19:39.277 "prchk_guard": false, 00:19:39.277 "hdgst": false, 00:19:39.277 "ddgst": false, 00:19:39.277 "psk": "key0", 00:19:39.277 "allow_unrecognized_csi": false, 00:19:39.277 "method": "bdev_nvme_attach_controller", 00:19:39.277 "req_id": 1 00:19:39.277 } 00:19:39.277 Got JSON-RPC error response 00:19:39.277 response: 00:19:39.277 { 00:19:39.277 "code": -5, 00:19:39.277 "message": "Input/output error" 00:19:39.277 } 00:19:39.277 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2166103 00:19:39.277 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2166103 ']' 00:19:39.277 13:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2166103 00:19:39.277 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:39.277 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.277 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166103 00:19:39.537 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:39.537 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:39.537 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166103' 00:19:39.537 killing process with pid 2166103 00:19:39.537 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2166103 00:19:39.537 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.537 00:19:39.537 Latency(us) 00:19:39.537 [2024-12-06T12:27:26.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.537 [2024-12-06T12:27:26.196Z] =================================================================================================================== 00:19:39.537 [2024-12-06T12:27:26.196Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.537 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2166103 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.537 13:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjylYQ9QYk 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjylYQ9QYk 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TjylYQ9QYk 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.TjylYQ9QYk 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2166450 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2166450 /var/tmp/bdevperf.sock 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2166450 ']' 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.537 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.538 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.538 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.538 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.538 [2024-12-06 13:27:26.099699] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:39.538 [2024-12-06 13:27:26.099755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166450 ] 00:19:39.538 [2024-12-06 13:27:26.184476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.799 [2024-12-06 13:27:26.212549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.369 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.369 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.369 13:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TjylYQ9QYk 00:19:40.629 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.629 [2024-12-06 13:27:27.240172] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.629 [2024-12-06 13:27:27.244794] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:40.629 [2024-12-06 13:27:27.244813] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:40.629 [2024-12-06 13:27:27.244833] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:40.629 [2024-12-06 13:27:27.245376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebb800 (107): Transport endpoint is not connected 00:19:40.629 [2024-12-06 13:27:27.246371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebb800 (9): Bad file descriptor 00:19:40.630 [2024-12-06 13:27:27.247373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:40.630 [2024-12-06 13:27:27.247382] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:40.630 [2024-12-06 13:27:27.247387] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:40.630 [2024-12-06 13:27:27.247395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:40.630 request: 00:19:40.630 { 00:19:40.630 "name": "TLSTEST", 00:19:40.630 "trtype": "tcp", 00:19:40.630 "traddr": "10.0.0.2", 00:19:40.630 "adrfam": "ipv4", 00:19:40.630 "trsvcid": "4420", 00:19:40.630 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.630 "prchk_reftag": false, 00:19:40.630 "prchk_guard": false, 00:19:40.630 "hdgst": false, 00:19:40.630 "ddgst": false, 00:19:40.630 "psk": "key0", 00:19:40.630 "allow_unrecognized_csi": false, 00:19:40.630 "method": "bdev_nvme_attach_controller", 00:19:40.630 "req_id": 1 00:19:40.630 } 00:19:40.630 Got JSON-RPC error response 00:19:40.630 response: 00:19:40.630 { 00:19:40.630 "code": -5, 00:19:40.630 "message": "Input/output error" 00:19:40.630 } 00:19:40.630 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2166450 00:19:40.630 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2166450 ']' 00:19:40.630 13:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2166450 00:19:40.630 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.630 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.630 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166450 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166450' 00:19:40.891 killing process with pid 2166450 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2166450 00:19:40.891 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.891 00:19:40.891 Latency(us) 00:19:40.891 [2024-12-06T12:27:27.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.891 [2024-12-06T12:27:27.550Z] =================================================================================================================== 00:19:40.891 [2024-12-06T12:27:27.550Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2166450 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.891 13:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2166765 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.891 13:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2166765 /var/tmp/bdevperf.sock 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2166765 ']' 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.891 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.891 [2024-12-06 13:27:27.489346] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:40.891 [2024-12-06 13:27:27.489403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166765 ] 00:19:41.152 [2024-12-06 13:27:27.573784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.152 [2024-12-06 13:27:27.602093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.724 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.724 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:41.724 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:41.985 [2024-12-06 13:27:28.445268] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:41.985 [2024-12-06 13:27:28.445296] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:41.985 request: 00:19:41.985 { 00:19:41.985 "name": "key0", 00:19:41.985 "path": "", 00:19:41.985 "method": "keyring_file_add_key", 00:19:41.985 "req_id": 1 00:19:41.985 } 00:19:41.985 Got JSON-RPC error response 00:19:41.985 response: 00:19:41.985 { 00:19:41.985 "code": -1, 00:19:41.985 "message": "Operation not permitted" 00:19:41.985 } 00:19:41.985 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:41.985 [2024-12-06 13:27:28.625801] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:41.985 [2024-12-06 13:27:28.625828] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:41.985 request: 00:19:41.985 { 00:19:41.985 "name": "TLSTEST", 00:19:41.985 "trtype": "tcp", 00:19:41.985 "traddr": "10.0.0.2", 00:19:41.985 "adrfam": "ipv4", 00:19:41.985 "trsvcid": "4420", 00:19:41.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.985 "prchk_reftag": false, 00:19:41.985 "prchk_guard": false, 00:19:41.985 "hdgst": false, 00:19:41.985 "ddgst": false, 00:19:41.985 "psk": "key0", 00:19:41.985 "allow_unrecognized_csi": false, 00:19:41.985 "method": "bdev_nvme_attach_controller", 00:19:41.985 "req_id": 1 00:19:41.985 } 00:19:41.985 Got JSON-RPC error response 00:19:41.985 response: 00:19:41.985 { 00:19:41.986 "code": -126, 00:19:41.986 "message": "Required key not available" 00:19:41.986 } 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2166765 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2166765 ']' 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2166765 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166765 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166765' 00:19:42.247 killing process with pid 2166765 
00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2166765 00:19:42.247 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.247 00:19:42.247 Latency(us) 00:19:42.247 [2024-12-06T12:27:28.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.247 [2024-12-06T12:27:28.906Z] =================================================================================================================== 00:19:42.247 [2024-12-06T12:27:28.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2166765 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2160115 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2160115 ']' 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2160115 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160115 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160115' 00:19:42.247 killing process with pid 2160115 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2160115 00:19:42.247 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2160115 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:42.509 13:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.iiraHCqYml 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:42.509 13:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.iiraHCqYml 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2167099 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2167099 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2167099 ']' 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.509 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.509 [2024-12-06 13:27:29.111041] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:42.509 [2024-12-06 13:27:29.111098] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.770 [2024-12-06 13:27:29.204153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.770 [2024-12-06 13:27:29.241409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.770 [2024-12-06 13:27:29.241450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.770 [2024-12-06 13:27:29.241462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.770 [2024-12-06 13:27:29.241468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.770 [2024-12-06 13:27:29.241472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.770 [2024-12-06 13:27:29.242023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.342 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.342 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:43.342 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:43.342 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:43.342 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.343 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.343 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.iiraHCqYml 00:19:43.343 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iiraHCqYml 00:19:43.343 13:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.604 [2024-12-06 13:27:30.118992] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.604 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.865 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.865 [2024-12-06 13:27:30.455803] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.865 [2024-12-06 13:27:30.455999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:43.865 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.127 malloc0 00:19:44.127 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.388 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:19:44.388 13:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.649 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iiraHCqYml 00:19:44.649 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:44.649 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:44.649 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:44.649 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iiraHCqYml 00:19:44.649 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2167511 00:19:44.650 13:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2167511 /var/tmp/bdevperf.sock 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2167511 ']' 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.650 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.650 [2024-12-06 13:27:31.196601] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:44.650 [2024-12-06 13:27:31.196652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167511 ] 00:19:44.650 [2024-12-06 13:27:31.281421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.911 [2024-12-06 13:27:31.310758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.911 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.911 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:44.911 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:19:45.172 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.172 [2024-12-06 13:27:31.732794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.172 TLSTESTn1 00:19:45.172 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:45.433 Running I/O for 10 seconds... 
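The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step earlier in this log (the `nvmf/common.sh@743 format_key NVMeTLSkey-1 …` lines) turns a raw hex key into the `NVMeTLSkey-1:02:…:` string that is then written to `/tmp/tmp.iiraHCqYml` and registered with `keyring_file_add_key`. A minimal sketch of that construction, assuming the standard TLS PSK interchange layout (configured-key bytes followed by a little-endian CRC32 of those bytes, base64-encoded between the version/HMAC prefix and a trailing colon) — the function name here mirrors the shell helper and is illustrative, not SPDK's actual Python:

```python
import base64
import struct
import zlib


def format_interchange_psk(hex_key: str, hmac_id: int) -> str:
    """Build an NVMe/TCP TLS PSK interchange string from a raw key.

    Assumed layout: "NVMeTLSkey-1:<hmac>:" + base64(key || CRC32(key)) + ":"
    where the CRC32 is appended as 4 little-endian bytes.
    """
    key = hex_key.encode("ascii")
    # CRC32 of the configured key, packed little-endian, appended as a
    # transmission-error check before base64 encoding.
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    payload = base64.b64encode(key + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02d}:{}:".format(hmac_id, payload)


if __name__ == "__main__":
    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
```

The `2` selects the HMAC identifier embedded after the version prefix (the log's `digest=2`); the CRC trailer is what lets a consumer detect a corrupted or truncated key file before attempting a TLS handshake with it.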
00:19:47.320 4899.00 IOPS, 19.14 MiB/s [2024-12-06T12:27:34.922Z] 4883.00 IOPS, 19.07 MiB/s [2024-12-06T12:27:36.305Z] 4867.33 IOPS, 19.01 MiB/s [2024-12-06T12:27:37.247Z] 5087.50 IOPS, 19.87 MiB/s [2024-12-06T12:27:38.188Z] 5242.80 IOPS, 20.48 MiB/s [2024-12-06T12:27:39.129Z] 5247.83 IOPS, 20.50 MiB/s [2024-12-06T12:27:40.071Z] 5178.86 IOPS, 20.23 MiB/s [2024-12-06T12:27:41.014Z] 5304.00 IOPS, 20.72 MiB/s [2024-12-06T12:27:41.957Z] 5395.11 IOPS, 21.07 MiB/s [2024-12-06T12:27:42.218Z] 5359.00 IOPS, 20.93 MiB/s 00:19:55.559 Latency(us) 00:19:55.559 [2024-12-06T12:27:42.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.559 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:55.559 Verification LBA range: start 0x0 length 0x2000 00:19:55.559 TLSTESTn1 : 10.05 5345.92 20.88 0.00 0.00 23875.59 4560.21 47404.37 00:19:55.559 [2024-12-06T12:27:42.218Z] =================================================================================================================== 00:19:55.559 [2024-12-06T12:27:42.218Z] Total : 5345.92 20.88 0.00 0.00 23875.59 4560.21 47404.37 00:19:55.559 { 00:19:55.559 "results": [ 00:19:55.559 { 00:19:55.559 "job": "TLSTESTn1", 00:19:55.559 "core_mask": "0x4", 00:19:55.559 "workload": "verify", 00:19:55.559 "status": "finished", 00:19:55.559 "verify_range": { 00:19:55.559 "start": 0, 00:19:55.559 "length": 8192 00:19:55.559 }, 00:19:55.559 "queue_depth": 128, 00:19:55.559 "io_size": 4096, 00:19:55.559 "runtime": 10.048227, 00:19:55.559 "iops": 5345.918240103453, 00:19:55.559 "mibps": 20.882493125404114, 00:19:55.559 "io_failed": 0, 00:19:55.559 "io_timeout": 0, 00:19:55.559 "avg_latency_us": 23875.58605878958, 00:19:55.559 "min_latency_us": 4560.213333333333, 00:19:55.559 "max_latency_us": 47404.37333333334 00:19:55.559 } 00:19:55.559 ], 00:19:55.559 "core_count": 1 00:19:55.559 } 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2167511 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2167511 ']' 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2167511 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167511 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167511' 00:19:55.559 killing process with pid 2167511 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2167511 00:19:55.559 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.559 00:19:55.559 Latency(us) 00:19:55.559 [2024-12-06T12:27:42.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.559 [2024-12-06T12:27:42.218Z] =================================================================================================================== 00:19:55.559 [2024-12-06T12:27:42.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2167511 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.iiraHCqYml 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iiraHCqYml 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iiraHCqYml 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iiraHCqYml 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iiraHCqYml 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2169530 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2169530 
/var/tmp/bdevperf.sock 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2169530 ']' 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.559 13:27:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.819 [2024-12-06 13:27:42.232079] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:19:55.819 [2024-12-06 13:27:42.232135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169530 ] 00:19:55.819 [2024-12-06 13:27:42.315148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.819 [2024-12-06 13:27:42.343126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.391 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.391 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.391 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:19:56.650 [2024-12-06 13:27:43.170227] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iiraHCqYml': 0100666 00:19:56.650 [2024-12-06 13:27:43.170249] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:56.650 request: 00:19:56.650 { 00:19:56.650 "name": "key0", 00:19:56.650 "path": "/tmp/tmp.iiraHCqYml", 00:19:56.650 "method": "keyring_file_add_key", 00:19:56.650 "req_id": 1 00:19:56.650 } 00:19:56.650 Got JSON-RPC error response 00:19:56.650 response: 00:19:56.651 { 00:19:56.651 "code": -1, 00:19:56.651 "message": "Operation not permitted" 00:19:56.651 } 00:19:56.651 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.911 [2024-12-06 13:27:43.338725] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.911 [2024-12-06 13:27:43.338753] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:56.911 request: 00:19:56.911 { 00:19:56.911 "name": "TLSTEST", 00:19:56.911 "trtype": "tcp", 00:19:56.911 "traddr": "10.0.0.2", 00:19:56.911 "adrfam": "ipv4", 00:19:56.911 "trsvcid": "4420", 00:19:56.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.911 "prchk_reftag": false, 00:19:56.911 "prchk_guard": false, 00:19:56.911 "hdgst": false, 00:19:56.911 "ddgst": false, 00:19:56.911 "psk": "key0", 00:19:56.911 "allow_unrecognized_csi": false, 00:19:56.911 "method": "bdev_nvme_attach_controller", 00:19:56.911 "req_id": 1 00:19:56.911 } 00:19:56.911 Got JSON-RPC error response 00:19:56.911 response: 00:19:56.911 { 00:19:56.911 "code": -126, 00:19:56.911 "message": "Required key not available" 00:19:56.911 } 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2169530 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2169530 ']' 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2169530 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169530 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2169530' 00:19:56.911 killing process with pid 2169530 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2169530 00:19:56.911 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.911 00:19:56.911 Latency(us) 00:19:56.911 [2024-12-06T12:27:43.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.911 [2024-12-06T12:27:43.570Z] =================================================================================================================== 00:19:56.911 [2024-12-06T12:27:43.570Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2169530 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2167099 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2167099 ']' 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2167099 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.911 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2167099 00:19:57.171 
13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2167099' 00:19:57.171 killing process with pid 2167099 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2167099 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2167099 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2169877 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2169877 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2169877 ']' 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:57.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.171 13:27:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.171 [2024-12-06 13:27:43.763261] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:19:57.171 [2024-12-06 13:27:43.763318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.432 [2024-12-06 13:27:43.853162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.432 [2024-12-06 13:27:43.882166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.432 [2024-12-06 13:27:43.882193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.432 [2024-12-06 13:27:43.882199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.432 [2024-12-06 13:27:43.882204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.432 [2024-12-06 13:27:43.882208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:57.432 [2024-12-06 13:27:43.882657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.iiraHCqYml 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.iiraHCqYml 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.iiraHCqYml 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iiraHCqYml 00:19:58.002 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:58.263 [2024-12-06 13:27:44.730659] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.263 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:58.263 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:58.523 [2024-12-06 13:27:45.067482] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.523 [2024-12-06 13:27:45.067686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.523 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:58.789 malloc0 00:19:58.789 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:58.789 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:19:59.049 [2024-12-06 13:27:45.558561] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iiraHCqYml': 0100666 00:19:59.049 [2024-12-06 13:27:45.558583] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:59.049 request: 00:19:59.049 { 00:19:59.049 "name": "key0", 00:19:59.049 "path": "/tmp/tmp.iiraHCqYml", 00:19:59.049 "method": "keyring_file_add_key", 00:19:59.049 "req_id": 1 
00:19:59.049 } 00:19:59.049 Got JSON-RPC error response 00:19:59.049 response: 00:19:59.049 { 00:19:59.049 "code": -1, 00:19:59.049 "message": "Operation not permitted" 00:19:59.049 } 00:19:59.049 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.309 [2024-12-06 13:27:45.710959] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:59.309 [2024-12-06 13:27:45.710986] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:59.309 request: 00:19:59.309 { 00:19:59.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.309 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.309 "psk": "key0", 00:19:59.309 "method": "nvmf_subsystem_add_host", 00:19:59.309 "req_id": 1 00:19:59.309 } 00:19:59.309 Got JSON-RPC error response 00:19:59.309 response: 00:19:59.309 { 00:19:59.309 "code": -32603, 00:19:59.309 "message": "Internal error" 00:19:59.309 } 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2169877 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2169877 ']' 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2169877 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.309 13:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2169877 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2169877' 00:19:59.309 killing process with pid 2169877 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2169877 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2169877 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.iiraHCqYml 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2170269 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2170269 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2170269 ']' 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.309 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.309 [2024-12-06 13:27:45.964338] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:19:59.309 [2024-12-06 13:27:45.964394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.569 [2024-12-06 13:27:46.054691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.569 [2024-12-06 13:27:46.086124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.569 [2024-12-06 13:27:46.086154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.569 [2024-12-06 13:27:46.086160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.569 [2024-12-06 13:27:46.086165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.569 [2024-12-06 13:27:46.086169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.569 [2024-12-06 13:27:46.086666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.141 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.141 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.141 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.141 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.141 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.402 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.402 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.iiraHCqYml 00:20:00.402 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iiraHCqYml 00:20:00.402 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.402 [2024-12-06 13:27:46.968625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.402 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.663 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.663 [2024-12-06 13:27:47.289399] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.663 [2024-12-06 13:27:47.289601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:00.663 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.925 malloc0 00:20:00.925 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.186 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:20:01.186 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2170784 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2170784 /var/tmp/bdevperf.sock 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2170784 ']' 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:01.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.448 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.448 [2024-12-06 13:27:48.013791] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:01.448 [2024-12-06 13:27:48.013844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170784 ] 00:20:01.448 [2024-12-06 13:27:48.095647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.709 [2024-12-06 13:27:48.124729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.280 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.280 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:02.280 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:20:02.541 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.541 [2024-12-06 13:27:49.120267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.541 TLSTESTn1 00:20:02.802 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:03.063 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:03.063 "subsystems": [ 00:20:03.063 { 00:20:03.063 "subsystem": "keyring", 00:20:03.063 "config": [ 00:20:03.063 { 00:20:03.063 "method": "keyring_file_add_key", 00:20:03.063 "params": { 00:20:03.063 "name": "key0", 00:20:03.063 "path": "/tmp/tmp.iiraHCqYml" 00:20:03.063 } 00:20:03.063 } 00:20:03.063 ] 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "subsystem": "iobuf", 00:20:03.063 "config": [ 00:20:03.063 { 00:20:03.063 "method": "iobuf_set_options", 00:20:03.063 "params": { 00:20:03.063 "small_pool_count": 8192, 00:20:03.063 "large_pool_count": 1024, 00:20:03.063 "small_bufsize": 8192, 00:20:03.063 "large_bufsize": 135168, 00:20:03.063 "enable_numa": false 00:20:03.063 } 00:20:03.063 } 00:20:03.063 ] 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "subsystem": "sock", 00:20:03.063 "config": [ 00:20:03.063 { 00:20:03.063 "method": "sock_set_default_impl", 00:20:03.063 "params": { 00:20:03.063 "impl_name": "posix" 00:20:03.063 } 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "method": "sock_impl_set_options", 00:20:03.063 "params": { 00:20:03.063 "impl_name": "ssl", 00:20:03.063 "recv_buf_size": 4096, 00:20:03.063 "send_buf_size": 4096, 00:20:03.063 "enable_recv_pipe": true, 00:20:03.063 "enable_quickack": false, 00:20:03.063 "enable_placement_id": 0, 00:20:03.063 "enable_zerocopy_send_server": true, 00:20:03.063 "enable_zerocopy_send_client": false, 00:20:03.063 "zerocopy_threshold": 0, 00:20:03.063 "tls_version": 0, 00:20:03.063 "enable_ktls": false 00:20:03.063 } 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "method": "sock_impl_set_options", 00:20:03.063 "params": { 00:20:03.063 "impl_name": "posix", 00:20:03.063 "recv_buf_size": 2097152, 00:20:03.063 "send_buf_size": 2097152, 00:20:03.063 "enable_recv_pipe": true, 00:20:03.063 "enable_quickack": false, 00:20:03.063 "enable_placement_id": 0, 
00:20:03.063 "enable_zerocopy_send_server": true, 00:20:03.063 "enable_zerocopy_send_client": false, 00:20:03.063 "zerocopy_threshold": 0, 00:20:03.063 "tls_version": 0, 00:20:03.063 "enable_ktls": false 00:20:03.063 } 00:20:03.063 } 00:20:03.063 ] 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "subsystem": "vmd", 00:20:03.063 "config": [] 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "subsystem": "accel", 00:20:03.063 "config": [ 00:20:03.063 { 00:20:03.063 "method": "accel_set_options", 00:20:03.063 "params": { 00:20:03.063 "small_cache_size": 128, 00:20:03.063 "large_cache_size": 16, 00:20:03.063 "task_count": 2048, 00:20:03.063 "sequence_count": 2048, 00:20:03.063 "buf_count": 2048 00:20:03.063 } 00:20:03.063 } 00:20:03.063 ] 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "subsystem": "bdev", 00:20:03.063 "config": [ 00:20:03.063 { 00:20:03.063 "method": "bdev_set_options", 00:20:03.063 "params": { 00:20:03.063 "bdev_io_pool_size": 65535, 00:20:03.063 "bdev_io_cache_size": 256, 00:20:03.063 "bdev_auto_examine": true, 00:20:03.063 "iobuf_small_cache_size": 128, 00:20:03.063 "iobuf_large_cache_size": 16 00:20:03.063 } 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "method": "bdev_raid_set_options", 00:20:03.063 "params": { 00:20:03.063 "process_window_size_kb": 1024, 00:20:03.063 "process_max_bandwidth_mb_sec": 0 00:20:03.063 } 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "method": "bdev_iscsi_set_options", 00:20:03.063 "params": { 00:20:03.063 "timeout_sec": 30 00:20:03.063 } 00:20:03.063 }, 00:20:03.063 { 00:20:03.063 "method": "bdev_nvme_set_options", 00:20:03.063 "params": { 00:20:03.063 "action_on_timeout": "none", 00:20:03.063 "timeout_us": 0, 00:20:03.063 "timeout_admin_us": 0, 00:20:03.063 "keep_alive_timeout_ms": 10000, 00:20:03.063 "arbitration_burst": 0, 00:20:03.063 "low_priority_weight": 0, 00:20:03.063 "medium_priority_weight": 0, 00:20:03.063 "high_priority_weight": 0, 00:20:03.063 "nvme_adminq_poll_period_us": 10000, 00:20:03.063 "nvme_ioq_poll_period_us": 0, 
00:20:03.063 "io_queue_requests": 0, 00:20:03.063 "delay_cmd_submit": true, 00:20:03.063 "transport_retry_count": 4, 00:20:03.063 "bdev_retry_count": 3, 00:20:03.063 "transport_ack_timeout": 0, 00:20:03.064 "ctrlr_loss_timeout_sec": 0, 00:20:03.064 "reconnect_delay_sec": 0, 00:20:03.064 "fast_io_fail_timeout_sec": 0, 00:20:03.064 "disable_auto_failback": false, 00:20:03.064 "generate_uuids": false, 00:20:03.064 "transport_tos": 0, 00:20:03.064 "nvme_error_stat": false, 00:20:03.064 "rdma_srq_size": 0, 00:20:03.064 "io_path_stat": false, 00:20:03.064 "allow_accel_sequence": false, 00:20:03.064 "rdma_max_cq_size": 0, 00:20:03.064 "rdma_cm_event_timeout_ms": 0, 00:20:03.064 "dhchap_digests": [ 00:20:03.064 "sha256", 00:20:03.064 "sha384", 00:20:03.064 "sha512" 00:20:03.064 ], 00:20:03.064 "dhchap_dhgroups": [ 00:20:03.064 "null", 00:20:03.064 "ffdhe2048", 00:20:03.064 "ffdhe3072", 00:20:03.064 "ffdhe4096", 00:20:03.064 "ffdhe6144", 00:20:03.064 "ffdhe8192" 00:20:03.064 ] 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "bdev_nvme_set_hotplug", 00:20:03.064 "params": { 00:20:03.064 "period_us": 100000, 00:20:03.064 "enable": false 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "bdev_malloc_create", 00:20:03.064 "params": { 00:20:03.064 "name": "malloc0", 00:20:03.064 "num_blocks": 8192, 00:20:03.064 "block_size": 4096, 00:20:03.064 "physical_block_size": 4096, 00:20:03.064 "uuid": "05d45815-c660-4484-8554-ae4121bb6e60", 00:20:03.064 "optimal_io_boundary": 0, 00:20:03.064 "md_size": 0, 00:20:03.064 "dif_type": 0, 00:20:03.064 "dif_is_head_of_md": false, 00:20:03.064 "dif_pi_format": 0 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "bdev_wait_for_examine" 00:20:03.064 } 00:20:03.064 ] 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "subsystem": "nbd", 00:20:03.064 "config": [] 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "subsystem": "scheduler", 00:20:03.064 "config": [ 00:20:03.064 { 00:20:03.064 "method": 
"framework_set_scheduler", 00:20:03.064 "params": { 00:20:03.064 "name": "static" 00:20:03.064 } 00:20:03.064 } 00:20:03.064 ] 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "subsystem": "nvmf", 00:20:03.064 "config": [ 00:20:03.064 { 00:20:03.064 "method": "nvmf_set_config", 00:20:03.064 "params": { 00:20:03.064 "discovery_filter": "match_any", 00:20:03.064 "admin_cmd_passthru": { 00:20:03.064 "identify_ctrlr": false 00:20:03.064 }, 00:20:03.064 "dhchap_digests": [ 00:20:03.064 "sha256", 00:20:03.064 "sha384", 00:20:03.064 "sha512" 00:20:03.064 ], 00:20:03.064 "dhchap_dhgroups": [ 00:20:03.064 "null", 00:20:03.064 "ffdhe2048", 00:20:03.064 "ffdhe3072", 00:20:03.064 "ffdhe4096", 00:20:03.064 "ffdhe6144", 00:20:03.064 "ffdhe8192" 00:20:03.064 ] 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_set_max_subsystems", 00:20:03.064 "params": { 00:20:03.064 "max_subsystems": 1024 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_set_crdt", 00:20:03.064 "params": { 00:20:03.064 "crdt1": 0, 00:20:03.064 "crdt2": 0, 00:20:03.064 "crdt3": 0 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_create_transport", 00:20:03.064 "params": { 00:20:03.064 "trtype": "TCP", 00:20:03.064 "max_queue_depth": 128, 00:20:03.064 "max_io_qpairs_per_ctrlr": 127, 00:20:03.064 "in_capsule_data_size": 4096, 00:20:03.064 "max_io_size": 131072, 00:20:03.064 "io_unit_size": 131072, 00:20:03.064 "max_aq_depth": 128, 00:20:03.064 "num_shared_buffers": 511, 00:20:03.064 "buf_cache_size": 4294967295, 00:20:03.064 "dif_insert_or_strip": false, 00:20:03.064 "zcopy": false, 00:20:03.064 "c2h_success": false, 00:20:03.064 "sock_priority": 0, 00:20:03.064 "abort_timeout_sec": 1, 00:20:03.064 "ack_timeout": 0, 00:20:03.064 "data_wr_pool_size": 0 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_create_subsystem", 00:20:03.064 "params": { 00:20:03.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.064 
"allow_any_host": false, 00:20:03.064 "serial_number": "SPDK00000000000001", 00:20:03.064 "model_number": "SPDK bdev Controller", 00:20:03.064 "max_namespaces": 10, 00:20:03.064 "min_cntlid": 1, 00:20:03.064 "max_cntlid": 65519, 00:20:03.064 "ana_reporting": false 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_subsystem_add_host", 00:20:03.064 "params": { 00:20:03.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.064 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.064 "psk": "key0" 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_subsystem_add_ns", 00:20:03.064 "params": { 00:20:03.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.064 "namespace": { 00:20:03.064 "nsid": 1, 00:20:03.064 "bdev_name": "malloc0", 00:20:03.064 "nguid": "05D45815C66044848554AE4121BB6E60", 00:20:03.064 "uuid": "05d45815-c660-4484-8554-ae4121bb6e60", 00:20:03.064 "no_auto_visible": false 00:20:03.064 } 00:20:03.064 } 00:20:03.064 }, 00:20:03.064 { 00:20:03.064 "method": "nvmf_subsystem_add_listener", 00:20:03.064 "params": { 00:20:03.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.064 "listen_address": { 00:20:03.064 "trtype": "TCP", 00:20:03.064 "adrfam": "IPv4", 00:20:03.064 "traddr": "10.0.0.2", 00:20:03.064 "trsvcid": "4420" 00:20:03.064 }, 00:20:03.064 "secure_channel": true 00:20:03.064 } 00:20:03.064 } 00:20:03.064 ] 00:20:03.064 } 00:20:03.064 ] 00:20:03.064 }' 00:20:03.064 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:03.325 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:03.325 "subsystems": [ 00:20:03.325 { 00:20:03.325 "subsystem": "keyring", 00:20:03.325 "config": [ 00:20:03.325 { 00:20:03.325 "method": "keyring_file_add_key", 00:20:03.325 "params": { 00:20:03.325 "name": "key0", 00:20:03.325 "path": "/tmp/tmp.iiraHCqYml" 00:20:03.325 } 
00:20:03.325 } 00:20:03.325 ] 00:20:03.325 }, 00:20:03.325 { 00:20:03.325 "subsystem": "iobuf", 00:20:03.325 "config": [ 00:20:03.325 { 00:20:03.325 "method": "iobuf_set_options", 00:20:03.325 "params": { 00:20:03.325 "small_pool_count": 8192, 00:20:03.325 "large_pool_count": 1024, 00:20:03.325 "small_bufsize": 8192, 00:20:03.325 "large_bufsize": 135168, 00:20:03.325 "enable_numa": false 00:20:03.325 } 00:20:03.325 } 00:20:03.325 ] 00:20:03.325 }, 00:20:03.325 { 00:20:03.325 "subsystem": "sock", 00:20:03.325 "config": [ 00:20:03.325 { 00:20:03.325 "method": "sock_set_default_impl", 00:20:03.325 "params": { 00:20:03.325 "impl_name": "posix" 00:20:03.325 } 00:20:03.325 }, 00:20:03.325 { 00:20:03.325 "method": "sock_impl_set_options", 00:20:03.325 "params": { 00:20:03.325 "impl_name": "ssl", 00:20:03.325 "recv_buf_size": 4096, 00:20:03.325 "send_buf_size": 4096, 00:20:03.325 "enable_recv_pipe": true, 00:20:03.325 "enable_quickack": false, 00:20:03.325 "enable_placement_id": 0, 00:20:03.325 "enable_zerocopy_send_server": true, 00:20:03.325 "enable_zerocopy_send_client": false, 00:20:03.325 "zerocopy_threshold": 0, 00:20:03.325 "tls_version": 0, 00:20:03.325 "enable_ktls": false 00:20:03.325 } 00:20:03.325 }, 00:20:03.325 { 00:20:03.325 "method": "sock_impl_set_options", 00:20:03.325 "params": { 00:20:03.326 "impl_name": "posix", 00:20:03.326 "recv_buf_size": 2097152, 00:20:03.326 "send_buf_size": 2097152, 00:20:03.326 "enable_recv_pipe": true, 00:20:03.326 "enable_quickack": false, 00:20:03.326 "enable_placement_id": 0, 00:20:03.326 "enable_zerocopy_send_server": true, 00:20:03.326 "enable_zerocopy_send_client": false, 00:20:03.326 "zerocopy_threshold": 0, 00:20:03.326 "tls_version": 0, 00:20:03.326 "enable_ktls": false 00:20:03.326 } 00:20:03.326 } 00:20:03.326 ] 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "subsystem": "vmd", 00:20:03.326 "config": [] 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "subsystem": "accel", 00:20:03.326 "config": [ 00:20:03.326 { 00:20:03.326 
"method": "accel_set_options", 00:20:03.326 "params": { 00:20:03.326 "small_cache_size": 128, 00:20:03.326 "large_cache_size": 16, 00:20:03.326 "task_count": 2048, 00:20:03.326 "sequence_count": 2048, 00:20:03.326 "buf_count": 2048 00:20:03.326 } 00:20:03.326 } 00:20:03.326 ] 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "subsystem": "bdev", 00:20:03.326 "config": [ 00:20:03.326 { 00:20:03.326 "method": "bdev_set_options", 00:20:03.326 "params": { 00:20:03.326 "bdev_io_pool_size": 65535, 00:20:03.326 "bdev_io_cache_size": 256, 00:20:03.326 "bdev_auto_examine": true, 00:20:03.326 "iobuf_small_cache_size": 128, 00:20:03.326 "iobuf_large_cache_size": 16 00:20:03.326 } 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "method": "bdev_raid_set_options", 00:20:03.326 "params": { 00:20:03.326 "process_window_size_kb": 1024, 00:20:03.326 "process_max_bandwidth_mb_sec": 0 00:20:03.326 } 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "method": "bdev_iscsi_set_options", 00:20:03.326 "params": { 00:20:03.326 "timeout_sec": 30 00:20:03.326 } 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "method": "bdev_nvme_set_options", 00:20:03.326 "params": { 00:20:03.326 "action_on_timeout": "none", 00:20:03.326 "timeout_us": 0, 00:20:03.326 "timeout_admin_us": 0, 00:20:03.326 "keep_alive_timeout_ms": 10000, 00:20:03.326 "arbitration_burst": 0, 00:20:03.326 "low_priority_weight": 0, 00:20:03.326 "medium_priority_weight": 0, 00:20:03.326 "high_priority_weight": 0, 00:20:03.326 "nvme_adminq_poll_period_us": 10000, 00:20:03.326 "nvme_ioq_poll_period_us": 0, 00:20:03.326 "io_queue_requests": 512, 00:20:03.326 "delay_cmd_submit": true, 00:20:03.326 "transport_retry_count": 4, 00:20:03.326 "bdev_retry_count": 3, 00:20:03.326 "transport_ack_timeout": 0, 00:20:03.326 "ctrlr_loss_timeout_sec": 0, 00:20:03.326 "reconnect_delay_sec": 0, 00:20:03.326 "fast_io_fail_timeout_sec": 0, 00:20:03.326 "disable_auto_failback": false, 00:20:03.326 "generate_uuids": false, 00:20:03.326 "transport_tos": 0, 00:20:03.326 
"nvme_error_stat": false, 00:20:03.326 "rdma_srq_size": 0, 00:20:03.326 "io_path_stat": false, 00:20:03.326 "allow_accel_sequence": false, 00:20:03.326 "rdma_max_cq_size": 0, 00:20:03.326 "rdma_cm_event_timeout_ms": 0, 00:20:03.326 "dhchap_digests": [ 00:20:03.326 "sha256", 00:20:03.326 "sha384", 00:20:03.326 "sha512" 00:20:03.326 ], 00:20:03.326 "dhchap_dhgroups": [ 00:20:03.326 "null", 00:20:03.326 "ffdhe2048", 00:20:03.326 "ffdhe3072", 00:20:03.326 "ffdhe4096", 00:20:03.326 "ffdhe6144", 00:20:03.326 "ffdhe8192" 00:20:03.326 ] 00:20:03.326 } 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "method": "bdev_nvme_attach_controller", 00:20:03.326 "params": { 00:20:03.326 "name": "TLSTEST", 00:20:03.326 "trtype": "TCP", 00:20:03.326 "adrfam": "IPv4", 00:20:03.326 "traddr": "10.0.0.2", 00:20:03.326 "trsvcid": "4420", 00:20:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.326 "prchk_reftag": false, 00:20:03.326 "prchk_guard": false, 00:20:03.326 "ctrlr_loss_timeout_sec": 0, 00:20:03.326 "reconnect_delay_sec": 0, 00:20:03.326 "fast_io_fail_timeout_sec": 0, 00:20:03.326 "psk": "key0", 00:20:03.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.326 "hdgst": false, 00:20:03.326 "ddgst": false, 00:20:03.326 "multipath": "multipath" 00:20:03.326 } 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "method": "bdev_nvme_set_hotplug", 00:20:03.326 "params": { 00:20:03.326 "period_us": 100000, 00:20:03.326 "enable": false 00:20:03.326 } 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "method": "bdev_wait_for_examine" 00:20:03.326 } 00:20:03.326 ] 00:20:03.326 }, 00:20:03.326 { 00:20:03.326 "subsystem": "nbd", 00:20:03.326 "config": [] 00:20:03.326 } 00:20:03.326 ] 00:20:03.326 }' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2170784 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2170784 ']' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2170784 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170784 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170784' 00:20:03.326 killing process with pid 2170784 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2170784 00:20:03.326 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.326 00:20:03.326 Latency(us) 00:20:03.326 [2024-12-06T12:27:49.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.326 [2024-12-06T12:27:49.985Z] =================================================================================================================== 00:20:03.326 [2024-12-06T12:27:49.985Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2170784 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2170269 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2170269 ']' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2170269 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2170269 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2170269' 00:20:03.326 killing process with pid 2170269 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2170269 00:20:03.326 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2170269 00:20:03.588 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:03.588 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.588 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.588 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.588 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:03.588 "subsystems": [ 00:20:03.588 { 00:20:03.588 "subsystem": "keyring", 00:20:03.588 "config": [ 00:20:03.588 { 00:20:03.588 "method": "keyring_file_add_key", 00:20:03.588 "params": { 00:20:03.588 "name": "key0", 00:20:03.588 "path": "/tmp/tmp.iiraHCqYml" 00:20:03.588 } 00:20:03.588 } 00:20:03.588 ] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "iobuf", 00:20:03.588 "config": [ 00:20:03.588 { 00:20:03.588 "method": "iobuf_set_options", 00:20:03.588 "params": { 00:20:03.588 "small_pool_count": 8192, 00:20:03.588 "large_pool_count": 1024, 00:20:03.588 "small_bufsize": 8192, 00:20:03.588 "large_bufsize": 135168, 
00:20:03.588 "enable_numa": false 00:20:03.588 } 00:20:03.588 } 00:20:03.588 ] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "sock", 00:20:03.588 "config": [ 00:20:03.588 { 00:20:03.588 "method": "sock_set_default_impl", 00:20:03.588 "params": { 00:20:03.588 "impl_name": "posix" 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "sock_impl_set_options", 00:20:03.588 "params": { 00:20:03.588 "impl_name": "ssl", 00:20:03.588 "recv_buf_size": 4096, 00:20:03.588 "send_buf_size": 4096, 00:20:03.588 "enable_recv_pipe": true, 00:20:03.588 "enable_quickack": false, 00:20:03.588 "enable_placement_id": 0, 00:20:03.588 "enable_zerocopy_send_server": true, 00:20:03.588 "enable_zerocopy_send_client": false, 00:20:03.588 "zerocopy_threshold": 0, 00:20:03.588 "tls_version": 0, 00:20:03.588 "enable_ktls": false 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "sock_impl_set_options", 00:20:03.588 "params": { 00:20:03.588 "impl_name": "posix", 00:20:03.588 "recv_buf_size": 2097152, 00:20:03.588 "send_buf_size": 2097152, 00:20:03.588 "enable_recv_pipe": true, 00:20:03.588 "enable_quickack": false, 00:20:03.588 "enable_placement_id": 0, 00:20:03.588 "enable_zerocopy_send_server": true, 00:20:03.588 "enable_zerocopy_send_client": false, 00:20:03.588 "zerocopy_threshold": 0, 00:20:03.588 "tls_version": 0, 00:20:03.588 "enable_ktls": false 00:20:03.588 } 00:20:03.588 } 00:20:03.588 ] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "vmd", 00:20:03.588 "config": [] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "accel", 00:20:03.588 "config": [ 00:20:03.588 { 00:20:03.588 "method": "accel_set_options", 00:20:03.588 "params": { 00:20:03.588 "small_cache_size": 128, 00:20:03.588 "large_cache_size": 16, 00:20:03.588 "task_count": 2048, 00:20:03.588 "sequence_count": 2048, 00:20:03.588 "buf_count": 2048 00:20:03.588 } 00:20:03.588 } 00:20:03.588 ] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "bdev", 00:20:03.588 
"config": [ 00:20:03.588 { 00:20:03.588 "method": "bdev_set_options", 00:20:03.588 "params": { 00:20:03.588 "bdev_io_pool_size": 65535, 00:20:03.588 "bdev_io_cache_size": 256, 00:20:03.588 "bdev_auto_examine": true, 00:20:03.588 "iobuf_small_cache_size": 128, 00:20:03.588 "iobuf_large_cache_size": 16 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "bdev_raid_set_options", 00:20:03.588 "params": { 00:20:03.588 "process_window_size_kb": 1024, 00:20:03.588 "process_max_bandwidth_mb_sec": 0 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "bdev_iscsi_set_options", 00:20:03.588 "params": { 00:20:03.588 "timeout_sec": 30 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "bdev_nvme_set_options", 00:20:03.588 "params": { 00:20:03.588 "action_on_timeout": "none", 00:20:03.588 "timeout_us": 0, 00:20:03.588 "timeout_admin_us": 0, 00:20:03.588 "keep_alive_timeout_ms": 10000, 00:20:03.588 "arbitration_burst": 0, 00:20:03.588 "low_priority_weight": 0, 00:20:03.588 "medium_priority_weight": 0, 00:20:03.588 "high_priority_weight": 0, 00:20:03.588 "nvme_adminq_poll_period_us": 10000, 00:20:03.588 "nvme_ioq_poll_period_us": 0, 00:20:03.588 "io_queue_requests": 0, 00:20:03.588 "delay_cmd_submit": true, 00:20:03.588 "transport_retry_count": 4, 00:20:03.588 "bdev_retry_count": 3, 00:20:03.588 "transport_ack_timeout": 0, 00:20:03.588 "ctrlr_loss_timeout_sec": 0, 00:20:03.588 "reconnect_delay_sec": 0, 00:20:03.588 "fast_io_fail_timeout_sec": 0, 00:20:03.588 "disable_auto_failback": false, 00:20:03.588 "generate_uuids": false, 00:20:03.588 "transport_tos": 0, 00:20:03.588 "nvme_error_stat": false, 00:20:03.588 "rdma_srq_size": 0, 00:20:03.588 "io_path_stat": false, 00:20:03.588 "allow_accel_sequence": false, 00:20:03.588 "rdma_max_cq_size": 0, 00:20:03.588 "rdma_cm_event_timeout_ms": 0, 00:20:03.588 "dhchap_digests": [ 00:20:03.588 "sha256", 00:20:03.588 "sha384", 00:20:03.588 "sha512" 00:20:03.588 ], 00:20:03.588 
"dhchap_dhgroups": [ 00:20:03.588 "null", 00:20:03.588 "ffdhe2048", 00:20:03.588 "ffdhe3072", 00:20:03.588 "ffdhe4096", 00:20:03.588 "ffdhe6144", 00:20:03.588 "ffdhe8192" 00:20:03.588 ] 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "bdev_nvme_set_hotplug", 00:20:03.588 "params": { 00:20:03.588 "period_us": 100000, 00:20:03.588 "enable": false 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "bdev_malloc_create", 00:20:03.588 "params": { 00:20:03.588 "name": "malloc0", 00:20:03.588 "num_blocks": 8192, 00:20:03.588 "block_size": 4096, 00:20:03.588 "physical_block_size": 4096, 00:20:03.588 "uuid": "05d45815-c660-4484-8554-ae4121bb6e60", 00:20:03.588 "optimal_io_boundary": 0, 00:20:03.588 "md_size": 0, 00:20:03.588 "dif_type": 0, 00:20:03.588 "dif_is_head_of_md": false, 00:20:03.588 "dif_pi_format": 0 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "method": "bdev_wait_for_examine" 00:20:03.588 } 00:20:03.588 ] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "nbd", 00:20:03.588 "config": [] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "scheduler", 00:20:03.588 "config": [ 00:20:03.588 { 00:20:03.588 "method": "framework_set_scheduler", 00:20:03.588 "params": { 00:20:03.588 "name": "static" 00:20:03.588 } 00:20:03.588 } 00:20:03.588 ] 00:20:03.588 }, 00:20:03.588 { 00:20:03.588 "subsystem": "nvmf", 00:20:03.588 "config": [ 00:20:03.588 { 00:20:03.588 "method": "nvmf_set_config", 00:20:03.588 "params": { 00:20:03.588 "discovery_filter": "match_any", 00:20:03.588 "admin_cmd_passthru": { 00:20:03.588 "identify_ctrlr": false 00:20:03.588 }, 00:20:03.588 "dhchap_digests": [ 00:20:03.588 "sha256", 00:20:03.588 "sha384", 00:20:03.588 "sha512" 00:20:03.588 ], 00:20:03.588 "dhchap_dhgroups": [ 00:20:03.588 "null", 00:20:03.588 "ffdhe2048", 00:20:03.588 "ffdhe3072", 00:20:03.588 "ffdhe4096", 00:20:03.588 "ffdhe6144", 00:20:03.588 "ffdhe8192" 00:20:03.588 ] 00:20:03.588 } 00:20:03.588 }, 00:20:03.588 { 
00:20:03.588 "method": "nvmf_set_max_subsystems", 00:20:03.589 "params": { 00:20:03.589 "max_subsystems": 1024 00:20:03.589 } 00:20:03.589 }, 00:20:03.589 { 00:20:03.589 "method": "nvmf_set_crdt", 00:20:03.589 "params": { 00:20:03.589 "crdt1": 0, 00:20:03.589 "crdt2": 0, 00:20:03.589 "crdt3": 0 00:20:03.589 } 00:20:03.589 }, 00:20:03.589 { 00:20:03.589 "method": "nvmf_create_transport", 00:20:03.589 "params": { 00:20:03.589 "trtype": "TCP", 00:20:03.589 "max_queue_depth": 128, 00:20:03.589 "max_io_qpairs_per_ctrlr": 127, 00:20:03.589 "in_capsule_data_size": 4096, 00:20:03.589 "max_io_size": 131072, 00:20:03.589 "io_unit_size": 131072, 00:20:03.589 "max_aq_depth": 128, 00:20:03.589 "num_shared_buffers": 511, 00:20:03.589 "buf_cache_size": 4294967295, 00:20:03.589 "dif_insert_or_strip": false, 00:20:03.589 "zcopy": false, 00:20:03.589 "c2h_success": false, 00:20:03.589 "sock_priority": 0, 00:20:03.589 "abort_timeout_sec": 1, 00:20:03.589 "ack_timeout": 0, 00:20:03.589 "data_wr_pool_size": 0 00:20:03.589 } 00:20:03.589 }, 00:20:03.589 { 00:20:03.589 "method": "nvmf_create_subsystem", 00:20:03.589 "params": { 00:20:03.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.589 "allow_any_host": false, 00:20:03.589 "serial_number": "SPDK00000000000001", 00:20:03.589 "model_number": "SPDK bdev Controller", 00:20:03.589 "max_namespaces": 10, 00:20:03.589 "min_cntlid": 1, 00:20:03.589 "max_cntlid": 65519, 00:20:03.589 "ana_reporting": false 00:20:03.589 } 00:20:03.589 }, 00:20:03.589 { 00:20:03.589 "method": "nvmf_subsystem_add_host", 00:20:03.589 "params": { 00:20:03.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.589 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.589 "psk": "key0" 00:20:03.589 } 00:20:03.589 }, 00:20:03.589 { 00:20:03.589 "method": "nvmf_subsystem_add_ns", 00:20:03.589 "params": { 00:20:03.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.589 "namespace": { 00:20:03.589 "nsid": 1, 00:20:03.589 "bdev_name": "malloc0", 00:20:03.589 "nguid": 
"05D45815C66044848554AE4121BB6E60", 00:20:03.589 "uuid": "05d45815-c660-4484-8554-ae4121bb6e60", 00:20:03.589 "no_auto_visible": false 00:20:03.589 } 00:20:03.589 } 00:20:03.589 }, 00:20:03.589 { 00:20:03.589 "method": "nvmf_subsystem_add_listener", 00:20:03.589 "params": { 00:20:03.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.589 "listen_address": { 00:20:03.589 "trtype": "TCP", 00:20:03.589 "adrfam": "IPv4", 00:20:03.589 "traddr": "10.0.0.2", 00:20:03.589 "trsvcid": "4420" 00:20:03.589 }, 00:20:03.589 "secure_channel": true 00:20:03.589 } 00:20:03.589 } 00:20:03.589 ] 00:20:03.589 } 00:20:03.589 ] 00:20:03.589 }' 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2171290 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2171290 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2171290 ']' 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.589 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.589 [2024-12-06 13:27:50.125267] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:03.589 [2024-12-06 13:27:50.125327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.589 [2024-12-06 13:27:50.214366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.849 [2024-12-06 13:27:50.246542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.849 [2024-12-06 13:27:50.246569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.849 [2024-12-06 13:27:50.246575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.850 [2024-12-06 13:27:50.246580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.850 [2024-12-06 13:27:50.246584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.850 [2024-12-06 13:27:50.247106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.850 [2024-12-06 13:27:50.440316] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.850 [2024-12-06 13:27:50.472337] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.850 [2024-12-06 13:27:50.472540] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2171325 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2171325 /var/tmp/bdevperf.sock 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2171325 ']' 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:04.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.421 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:04.421 "subsystems": [ 00:20:04.421 { 00:20:04.421 "subsystem": "keyring", 00:20:04.421 "config": [ 00:20:04.421 { 00:20:04.421 "method": "keyring_file_add_key", 00:20:04.421 "params": { 00:20:04.421 "name": "key0", 00:20:04.421 "path": "/tmp/tmp.iiraHCqYml" 00:20:04.421 } 00:20:04.421 } 00:20:04.421 ] 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "subsystem": "iobuf", 00:20:04.421 "config": [ 00:20:04.421 { 00:20:04.421 "method": "iobuf_set_options", 00:20:04.421 "params": { 00:20:04.421 "small_pool_count": 8192, 00:20:04.421 "large_pool_count": 1024, 00:20:04.421 "small_bufsize": 8192, 00:20:04.421 "large_bufsize": 135168, 00:20:04.421 "enable_numa": false 00:20:04.421 } 00:20:04.421 } 00:20:04.421 ] 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "subsystem": "sock", 00:20:04.421 "config": [ 00:20:04.421 { 00:20:04.421 "method": "sock_set_default_impl", 00:20:04.421 "params": { 00:20:04.421 "impl_name": "posix" 00:20:04.421 } 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "method": "sock_impl_set_options", 00:20:04.421 "params": { 00:20:04.421 "impl_name": "ssl", 00:20:04.421 "recv_buf_size": 4096, 00:20:04.421 "send_buf_size": 4096, 00:20:04.421 "enable_recv_pipe": true, 00:20:04.421 "enable_quickack": false, 00:20:04.421 "enable_placement_id": 0, 00:20:04.421 "enable_zerocopy_send_server": true, 00:20:04.421 
"enable_zerocopy_send_client": false, 00:20:04.421 "zerocopy_threshold": 0, 00:20:04.421 "tls_version": 0, 00:20:04.421 "enable_ktls": false 00:20:04.421 } 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "method": "sock_impl_set_options", 00:20:04.421 "params": { 00:20:04.421 "impl_name": "posix", 00:20:04.421 "recv_buf_size": 2097152, 00:20:04.421 "send_buf_size": 2097152, 00:20:04.421 "enable_recv_pipe": true, 00:20:04.421 "enable_quickack": false, 00:20:04.421 "enable_placement_id": 0, 00:20:04.421 "enable_zerocopy_send_server": true, 00:20:04.421 "enable_zerocopy_send_client": false, 00:20:04.421 "zerocopy_threshold": 0, 00:20:04.421 "tls_version": 0, 00:20:04.421 "enable_ktls": false 00:20:04.421 } 00:20:04.421 } 00:20:04.421 ] 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "subsystem": "vmd", 00:20:04.421 "config": [] 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "subsystem": "accel", 00:20:04.421 "config": [ 00:20:04.421 { 00:20:04.421 "method": "accel_set_options", 00:20:04.421 "params": { 00:20:04.421 "small_cache_size": 128, 00:20:04.421 "large_cache_size": 16, 00:20:04.421 "task_count": 2048, 00:20:04.421 "sequence_count": 2048, 00:20:04.421 "buf_count": 2048 00:20:04.421 } 00:20:04.421 } 00:20:04.421 ] 00:20:04.421 }, 00:20:04.421 { 00:20:04.421 "subsystem": "bdev", 00:20:04.421 "config": [ 00:20:04.421 { 00:20:04.421 "method": "bdev_set_options", 00:20:04.421 "params": { 00:20:04.421 "bdev_io_pool_size": 65535, 00:20:04.421 "bdev_io_cache_size": 256, 00:20:04.422 "bdev_auto_examine": true, 00:20:04.422 "iobuf_small_cache_size": 128, 00:20:04.422 "iobuf_large_cache_size": 16 00:20:04.422 } 00:20:04.422 }, 00:20:04.422 { 00:20:04.422 "method": "bdev_raid_set_options", 00:20:04.422 "params": { 00:20:04.422 "process_window_size_kb": 1024, 00:20:04.422 "process_max_bandwidth_mb_sec": 0 00:20:04.422 } 00:20:04.422 }, 00:20:04.422 { 00:20:04.422 "method": "bdev_iscsi_set_options", 00:20:04.422 "params": { 00:20:04.422 "timeout_sec": 30 00:20:04.422 } 00:20:04.422 }, 
00:20:04.422 { 00:20:04.422 "method": "bdev_nvme_set_options", 00:20:04.422 "params": { 00:20:04.422 "action_on_timeout": "none", 00:20:04.422 "timeout_us": 0, 00:20:04.422 "timeout_admin_us": 0, 00:20:04.422 "keep_alive_timeout_ms": 10000, 00:20:04.422 "arbitration_burst": 0, 00:20:04.422 "low_priority_weight": 0, 00:20:04.422 "medium_priority_weight": 0, 00:20:04.422 "high_priority_weight": 0, 00:20:04.422 "nvme_adminq_poll_period_us": 10000, 00:20:04.422 "nvme_ioq_poll_period_us": 0, 00:20:04.422 "io_queue_requests": 512, 00:20:04.422 "delay_cmd_submit": true, 00:20:04.422 "transport_retry_count": 4, 00:20:04.422 "bdev_retry_count": 3, 00:20:04.422 "transport_ack_timeout": 0, 00:20:04.422 "ctrlr_loss_timeout_sec": 0, 00:20:04.422 "reconnect_delay_sec": 0, 00:20:04.422 "fast_io_fail_timeout_sec": 0, 00:20:04.422 "disable_auto_failback": false, 00:20:04.422 "generate_uuids": false, 00:20:04.422 "transport_tos": 0, 00:20:04.422 "nvme_error_stat": false, 00:20:04.422 "rdma_srq_size": 0, 00:20:04.422 "io_path_stat": false, 00:20:04.422 "allow_accel_sequence": false, 00:20:04.422 "rdma_max_cq_size": 0, 00:20:04.422 "rdma_cm_event_timeout_ms": 0, 00:20:04.422 "dhchap_digests": [ 00:20:04.422 "sha256", 00:20:04.422 "sha384", 00:20:04.422 "sha512" 00:20:04.422 ], 00:20:04.422 "dhchap_dhgroups": [ 00:20:04.422 "null", 00:20:04.422 "ffdhe2048", 00:20:04.422 "ffdhe3072", 00:20:04.422 "ffdhe4096", 00:20:04.422 "ffdhe6144", 00:20:04.422 "ffdhe8192" 00:20:04.422 ] 00:20:04.422 } 00:20:04.422 }, 00:20:04.422 { 00:20:04.422 "method": "bdev_nvme_attach_controller", 00:20:04.422 "params": { 00:20:04.422 "name": "TLSTEST", 00:20:04.422 "trtype": "TCP", 00:20:04.422 "adrfam": "IPv4", 00:20:04.422 "traddr": "10.0.0.2", 00:20:04.422 "trsvcid": "4420", 00:20:04.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.422 "prchk_reftag": false, 00:20:04.422 "prchk_guard": false, 00:20:04.422 "ctrlr_loss_timeout_sec": 0, 00:20:04.422 "reconnect_delay_sec": 0, 00:20:04.422 
"fast_io_fail_timeout_sec": 0, 00:20:04.422 "psk": "key0", 00:20:04.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.422 "hdgst": false, 00:20:04.422 "ddgst": false, 00:20:04.422 "multipath": "multipath" 00:20:04.422 } 00:20:04.422 }, 00:20:04.422 { 00:20:04.422 "method": "bdev_nvme_set_hotplug", 00:20:04.422 "params": { 00:20:04.422 "period_us": 100000, 00:20:04.422 "enable": false 00:20:04.422 } 00:20:04.422 }, 00:20:04.422 { 00:20:04.422 "method": "bdev_wait_for_examine" 00:20:04.422 } 00:20:04.422 ] 00:20:04.422 }, 00:20:04.422 { 00:20:04.422 "subsystem": "nbd", 00:20:04.422 "config": [] 00:20:04.422 } 00:20:04.422 ] 00:20:04.422 }' 00:20:04.422 [2024-12-06 13:27:51.013074] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:04.422 [2024-12-06 13:27:51.013139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171325 ] 00:20:04.682 [2024-12-06 13:27:51.099782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.682 [2024-12-06 13:27:51.129074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.683 [2024-12-06 13:27:51.263973] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.253 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.253 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:05.253 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.253 Running I/O for 10 seconds... 
00:20:07.579 5981.00 IOPS, 23.36 MiB/s [2024-12-06T12:27:55.197Z] 5892.00 IOPS, 23.02 MiB/s [2024-12-06T12:27:56.143Z] 5966.67 IOPS, 23.31 MiB/s [2024-12-06T12:27:57.086Z] 6066.50 IOPS, 23.70 MiB/s [2024-12-06T12:27:58.028Z] 6018.00 IOPS, 23.51 MiB/s [2024-12-06T12:27:58.972Z] 5999.17 IOPS, 23.43 MiB/s [2024-12-06T12:27:59.914Z] 6001.14 IOPS, 23.44 MiB/s [2024-12-06T12:28:01.299Z] 6031.50 IOPS, 23.56 MiB/s [2024-12-06T12:28:02.243Z] 6028.11 IOPS, 23.55 MiB/s [2024-12-06T12:28:02.243Z] 6051.40 IOPS, 23.64 MiB/s 00:20:15.584 Latency(us) 00:20:15.584 [2024-12-06T12:28:02.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.584 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.584 Verification LBA range: start 0x0 length 0x2000 00:20:15.584 TLSTESTn1 : 10.02 6054.21 23.65 0.00 0.00 21107.59 5597.87 23592.96 00:20:15.584 [2024-12-06T12:28:02.243Z] =================================================================================================================== 00:20:15.584 [2024-12-06T12:28:02.243Z] Total : 6054.21 23.65 0.00 0.00 21107.59 5597.87 23592.96 00:20:15.584 { 00:20:15.584 "results": [ 00:20:15.584 { 00:20:15.584 "job": "TLSTESTn1", 00:20:15.584 "core_mask": "0x4", 00:20:15.584 "workload": "verify", 00:20:15.584 "status": "finished", 00:20:15.584 "verify_range": { 00:20:15.584 "start": 0, 00:20:15.584 "length": 8192 00:20:15.584 }, 00:20:15.584 "queue_depth": 128, 00:20:15.584 "io_size": 4096, 00:20:15.584 "runtime": 10.016341, 00:20:15.584 "iops": 6054.206820634401, 00:20:15.584 "mibps": 23.64924539310313, 00:20:15.584 "io_failed": 0, 00:20:15.584 "io_timeout": 0, 00:20:15.584 "avg_latency_us": 21107.585696366044, 00:20:15.584 "min_latency_us": 5597.866666666667, 00:20:15.584 "max_latency_us": 23592.96 00:20:15.584 } 00:20:15.584 ], 00:20:15.584 "core_count": 1 00:20:15.584 } 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2171325 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2171325 ']' 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2171325 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171325 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171325' 00:20:15.584 killing process with pid 2171325 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2171325 00:20:15.584 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.584 00:20:15.584 Latency(us) 00:20:15.584 [2024-12-06T12:28:02.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.584 [2024-12-06T12:28:02.243Z] =================================================================================================================== 00:20:15.584 [2024-12-06T12:28:02.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.584 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2171325 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2171290 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 2171290 ']' 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2171290 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171290 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171290' 00:20:15.584 killing process with pid 2171290 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2171290 00:20:15.584 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2171290 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2173662 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2173662 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:15.846 13:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2173662 ']' 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.846 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.846 [2024-12-06 13:28:02.336916] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:15.846 [2024-12-06 13:28:02.336972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.846 [2024-12-06 13:28:02.428842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.846 [2024-12-06 13:28:02.475215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.846 [2024-12-06 13:28:02.475266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.846 [2024-12-06 13:28:02.475274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.846 [2024-12-06 13:28:02.475282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:15.846 [2024-12-06 13:28:02.475288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.846 [2024-12-06 13:28:02.476079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.iiraHCqYml 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iiraHCqYml 00:20:16.789 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:16.789 [2024-12-06 13:28:03.351470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.790 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.050 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.312 [2024-12-06 13:28:03.748464] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:17.312 [2024-12-06 13:28:03.748798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.312 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:17.312 malloc0 00:20:17.573 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:17.573 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:20:17.834 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2174029 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2174029 /var/tmp/bdevperf.sock 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2174029 ']' 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.095 
13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.095 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.095 [2024-12-06 13:28:04.588090] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:18.095 [2024-12-06 13:28:04.588154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174029 ] 00:20:18.095 [2024-12-06 13:28:04.669538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.095 [2024-12-06 13:28:04.703062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.356 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.356 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.356 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:20:18.356 13:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.620 [2024-12-06 13:28:05.104607] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:18.620 nvme0n1 00:20:18.620 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.883 Running I/O for 1 seconds... 00:20:19.924 3813.00 IOPS, 14.89 MiB/s 00:20:19.924 Latency(us) 00:20:19.924 [2024-12-06T12:28:06.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.924 Verification LBA range: start 0x0 length 0x2000 00:20:19.924 nvme0n1 : 1.02 3867.51 15.11 0.00 0.00 32828.85 4560.21 83449.17 00:20:19.924 [2024-12-06T12:28:06.583Z] =================================================================================================================== 00:20:19.924 [2024-12-06T12:28:06.583Z] Total : 3867.51 15.11 0.00 0.00 32828.85 4560.21 83449.17 00:20:19.924 { 00:20:19.924 "results": [ 00:20:19.924 { 00:20:19.924 "job": "nvme0n1", 00:20:19.924 "core_mask": "0x2", 00:20:19.924 "workload": "verify", 00:20:19.924 "status": "finished", 00:20:19.924 "verify_range": { 00:20:19.924 "start": 0, 00:20:19.924 "length": 8192 00:20:19.924 }, 00:20:19.924 "queue_depth": 128, 00:20:19.924 "io_size": 4096, 00:20:19.924 "runtime": 1.019003, 00:20:19.924 "iops": 3867.505787519762, 00:20:19.924 "mibps": 15.107444482499071, 00:20:19.924 "io_failed": 0, 00:20:19.924 "io_timeout": 0, 00:20:19.924 "avg_latency_us": 32828.852778482615, 00:20:19.924 "min_latency_us": 4560.213333333333, 00:20:19.924 "max_latency_us": 83449.17333333334 00:20:19.924 } 00:20:19.924 ], 00:20:19.924 "core_count": 1 00:20:19.924 } 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2174029 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2174029 ']' 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2174029 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174029 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174029' 00:20:19.924 killing process with pid 2174029 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2174029 00:20:19.924 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.924 00:20:19.924 Latency(us) 00:20:19.924 [2024-12-06T12:28:06.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.924 [2024-12-06T12:28:06.583Z] =================================================================================================================== 00:20:19.924 [2024-12-06T12:28:06.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2174029 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2173662 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2173662 ']' 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2173662 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2173662 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2173662' 00:20:19.924 killing process with pid 2173662 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2173662 00:20:19.924 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2173662 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2174387 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2174387 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2174387 ']' 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.234 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.234 [2024-12-06 13:28:06.766768] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:20.234 [2024-12-06 13:28:06.766836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.234 [2024-12-06 13:28:06.865226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.497 [2024-12-06 13:28:06.915996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.497 [2024-12-06 13:28:06.916053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.497 [2024-12-06 13:28:06.916063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.497 [2024-12-06 13:28:06.916071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.497 [2024-12-06 13:28:06.916077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.497 [2024-12-06 13:28:06.916842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.068 [2024-12-06 13:28:07.620388] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.068 malloc0 00:20:21.068 [2024-12-06 13:28:07.650434] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.068 [2024-12-06 13:28:07.650776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2174734 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2174734 /var/tmp/bdevperf.sock 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2174734 ']' 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.068 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.327 [2024-12-06 13:28:07.735878] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:21.327 [2024-12-06 13:28:07.735946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174734 ] 00:20:21.327 [2024-12-06 13:28:07.823251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.327 [2024-12-06 13:28:07.857444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.897 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.897 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.897 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iiraHCqYml 00:20:22.157 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.417 [2024-12-06 13:28:08.868422] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.417 nvme0n1 00:20:22.417 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.417 Running I/O for 1 seconds... 
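The client-side TLS sequence logged above — registering the PSK file with `keyring_file_add_key`, then attaching the NVMe/TCP controller with `--psk` — can be sketched as below. Paths, key names, and NQNs are copied from this log; the `run` wrapper only prints each command, so the sketch is inert until the echo is dropped and a bdevperf instance is actually listening on the socket.

```shell
# Dry-run sketch of the TLS client setup shown in the log above.
# The rpc.py path, PSK file, address, and NQNs are taken from the
# log; "run" only prints each command so this executes anywhere.
run() { printf '%s\n' "$*"; }

RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

# 1) Register the pre-shared key file under the name key0
run $RPC keyring_file_add_key key0 /tmp/tmp.iiraHCqYml

# 2) Attach an NVMe/TCP controller presenting key0 as its TLS PSK
run $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
  -s 4420 -f ipv4 --psk key0 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
```

The same two RPCs appear in target/tls.sh at markers 259 and 260 above; replacing `run` with direct execution reproduces the attach step against a live bdevperf socket.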
00:20:23.799 5583.00 IOPS, 21.81 MiB/s 00:20:23.799 Latency(us) 00:20:23.799 [2024-12-06T12:28:10.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.799 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:23.799 Verification LBA range: start 0x0 length 0x2000 00:20:23.799 nvme0n1 : 1.02 5585.39 21.82 0.00 0.00 22683.39 4696.75 23156.05 00:20:23.799 [2024-12-06T12:28:10.458Z] =================================================================================================================== 00:20:23.799 [2024-12-06T12:28:10.458Z] Total : 5585.39 21.82 0.00 0.00 22683.39 4696.75 23156.05 00:20:23.799 { 00:20:23.799 "results": [ 00:20:23.799 { 00:20:23.799 "job": "nvme0n1", 00:20:23.799 "core_mask": "0x2", 00:20:23.799 "workload": "verify", 00:20:23.799 "status": "finished", 00:20:23.799 "verify_range": { 00:20:23.799 "start": 0, 00:20:23.799 "length": 8192 00:20:23.799 }, 00:20:23.799 "queue_depth": 128, 00:20:23.799 "io_size": 4096, 00:20:23.799 "runtime": 1.022489, 00:20:23.799 "iops": 5585.390160676545, 00:20:23.799 "mibps": 21.817930315142753, 00:20:23.799 "io_failed": 0, 00:20:23.799 "io_timeout": 0, 00:20:23.799 "avg_latency_us": 22683.386456545846, 00:20:23.799 "min_latency_us": 4696.746666666667, 00:20:23.799 "max_latency_us": 23156.053333333333 00:20:23.799 } 00:20:23.799 ], 00:20:23.799 "core_count": 1 00:20:23.799 } 00:20:23.799 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:23.799 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.799 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.799 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.799 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:23.799 "subsystems": [ 00:20:23.799 { 00:20:23.799 "subsystem": 
"keyring", 00:20:23.799 "config": [ 00:20:23.799 { 00:20:23.799 "method": "keyring_file_add_key", 00:20:23.799 "params": { 00:20:23.799 "name": "key0", 00:20:23.799 "path": "/tmp/tmp.iiraHCqYml" 00:20:23.799 } 00:20:23.799 } 00:20:23.799 ] 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "subsystem": "iobuf", 00:20:23.799 "config": [ 00:20:23.799 { 00:20:23.799 "method": "iobuf_set_options", 00:20:23.799 "params": { 00:20:23.799 "small_pool_count": 8192, 00:20:23.799 "large_pool_count": 1024, 00:20:23.799 "small_bufsize": 8192, 00:20:23.799 "large_bufsize": 135168, 00:20:23.799 "enable_numa": false 00:20:23.799 } 00:20:23.799 } 00:20:23.799 ] 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "subsystem": "sock", 00:20:23.799 "config": [ 00:20:23.799 { 00:20:23.799 "method": "sock_set_default_impl", 00:20:23.799 "params": { 00:20:23.799 "impl_name": "posix" 00:20:23.799 } 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "method": "sock_impl_set_options", 00:20:23.799 "params": { 00:20:23.799 "impl_name": "ssl", 00:20:23.799 "recv_buf_size": 4096, 00:20:23.799 "send_buf_size": 4096, 00:20:23.799 "enable_recv_pipe": true, 00:20:23.799 "enable_quickack": false, 00:20:23.799 "enable_placement_id": 0, 00:20:23.799 "enable_zerocopy_send_server": true, 00:20:23.799 "enable_zerocopy_send_client": false, 00:20:23.799 "zerocopy_threshold": 0, 00:20:23.799 "tls_version": 0, 00:20:23.799 "enable_ktls": false 00:20:23.799 } 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "method": "sock_impl_set_options", 00:20:23.799 "params": { 00:20:23.799 "impl_name": "posix", 00:20:23.799 "recv_buf_size": 2097152, 00:20:23.799 "send_buf_size": 2097152, 00:20:23.799 "enable_recv_pipe": true, 00:20:23.799 "enable_quickack": false, 00:20:23.799 "enable_placement_id": 0, 00:20:23.799 "enable_zerocopy_send_server": true, 00:20:23.799 "enable_zerocopy_send_client": false, 00:20:23.799 "zerocopy_threshold": 0, 00:20:23.799 "tls_version": 0, 00:20:23.799 "enable_ktls": false 00:20:23.799 } 00:20:23.799 } 00:20:23.799 
] 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "subsystem": "vmd", 00:20:23.799 "config": [] 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "subsystem": "accel", 00:20:23.799 "config": [ 00:20:23.799 { 00:20:23.799 "method": "accel_set_options", 00:20:23.799 "params": { 00:20:23.799 "small_cache_size": 128, 00:20:23.799 "large_cache_size": 16, 00:20:23.799 "task_count": 2048, 00:20:23.799 "sequence_count": 2048, 00:20:23.799 "buf_count": 2048 00:20:23.799 } 00:20:23.799 } 00:20:23.799 ] 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "subsystem": "bdev", 00:20:23.799 "config": [ 00:20:23.799 { 00:20:23.799 "method": "bdev_set_options", 00:20:23.799 "params": { 00:20:23.799 "bdev_io_pool_size": 65535, 00:20:23.799 "bdev_io_cache_size": 256, 00:20:23.799 "bdev_auto_examine": true, 00:20:23.799 "iobuf_small_cache_size": 128, 00:20:23.799 "iobuf_large_cache_size": 16 00:20:23.799 } 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "method": "bdev_raid_set_options", 00:20:23.799 "params": { 00:20:23.799 "process_window_size_kb": 1024, 00:20:23.799 "process_max_bandwidth_mb_sec": 0 00:20:23.799 } 00:20:23.799 }, 00:20:23.799 { 00:20:23.799 "method": "bdev_iscsi_set_options", 00:20:23.800 "params": { 00:20:23.800 "timeout_sec": 30 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "bdev_nvme_set_options", 00:20:23.800 "params": { 00:20:23.800 "action_on_timeout": "none", 00:20:23.800 "timeout_us": 0, 00:20:23.800 "timeout_admin_us": 0, 00:20:23.800 "keep_alive_timeout_ms": 10000, 00:20:23.800 "arbitration_burst": 0, 00:20:23.800 "low_priority_weight": 0, 00:20:23.800 "medium_priority_weight": 0, 00:20:23.800 "high_priority_weight": 0, 00:20:23.800 "nvme_adminq_poll_period_us": 10000, 00:20:23.800 "nvme_ioq_poll_period_us": 0, 00:20:23.800 "io_queue_requests": 0, 00:20:23.800 "delay_cmd_submit": true, 00:20:23.800 "transport_retry_count": 4, 00:20:23.800 "bdev_retry_count": 3, 00:20:23.800 "transport_ack_timeout": 0, 00:20:23.800 "ctrlr_loss_timeout_sec": 0, 
00:20:23.800 "reconnect_delay_sec": 0, 00:20:23.800 "fast_io_fail_timeout_sec": 0, 00:20:23.800 "disable_auto_failback": false, 00:20:23.800 "generate_uuids": false, 00:20:23.800 "transport_tos": 0, 00:20:23.800 "nvme_error_stat": false, 00:20:23.800 "rdma_srq_size": 0, 00:20:23.800 "io_path_stat": false, 00:20:23.800 "allow_accel_sequence": false, 00:20:23.800 "rdma_max_cq_size": 0, 00:20:23.800 "rdma_cm_event_timeout_ms": 0, 00:20:23.800 "dhchap_digests": [ 00:20:23.800 "sha256", 00:20:23.800 "sha384", 00:20:23.800 "sha512" 00:20:23.800 ], 00:20:23.800 "dhchap_dhgroups": [ 00:20:23.800 "null", 00:20:23.800 "ffdhe2048", 00:20:23.800 "ffdhe3072", 00:20:23.800 "ffdhe4096", 00:20:23.800 "ffdhe6144", 00:20:23.800 "ffdhe8192" 00:20:23.800 ] 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "bdev_nvme_set_hotplug", 00:20:23.800 "params": { 00:20:23.800 "period_us": 100000, 00:20:23.800 "enable": false 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "bdev_malloc_create", 00:20:23.800 "params": { 00:20:23.800 "name": "malloc0", 00:20:23.800 "num_blocks": 8192, 00:20:23.800 "block_size": 4096, 00:20:23.800 "physical_block_size": 4096, 00:20:23.800 "uuid": "9dea51d9-87b4-435a-892a-fe7c81c04991", 00:20:23.800 "optimal_io_boundary": 0, 00:20:23.800 "md_size": 0, 00:20:23.800 "dif_type": 0, 00:20:23.800 "dif_is_head_of_md": false, 00:20:23.800 "dif_pi_format": 0 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "bdev_wait_for_examine" 00:20:23.800 } 00:20:23.800 ] 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "subsystem": "nbd", 00:20:23.800 "config": [] 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "subsystem": "scheduler", 00:20:23.800 "config": [ 00:20:23.800 { 00:20:23.800 "method": "framework_set_scheduler", 00:20:23.800 "params": { 00:20:23.800 "name": "static" 00:20:23.800 } 00:20:23.800 } 00:20:23.800 ] 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "subsystem": "nvmf", 00:20:23.800 "config": [ 00:20:23.800 { 
00:20:23.800 "method": "nvmf_set_config", 00:20:23.800 "params": { 00:20:23.800 "discovery_filter": "match_any", 00:20:23.800 "admin_cmd_passthru": { 00:20:23.800 "identify_ctrlr": false 00:20:23.800 }, 00:20:23.800 "dhchap_digests": [ 00:20:23.800 "sha256", 00:20:23.800 "sha384", 00:20:23.800 "sha512" 00:20:23.800 ], 00:20:23.800 "dhchap_dhgroups": [ 00:20:23.800 "null", 00:20:23.800 "ffdhe2048", 00:20:23.800 "ffdhe3072", 00:20:23.800 "ffdhe4096", 00:20:23.800 "ffdhe6144", 00:20:23.800 "ffdhe8192" 00:20:23.800 ] 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_set_max_subsystems", 00:20:23.800 "params": { 00:20:23.800 "max_subsystems": 1024 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_set_crdt", 00:20:23.800 "params": { 00:20:23.800 "crdt1": 0, 00:20:23.800 "crdt2": 0, 00:20:23.800 "crdt3": 0 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_create_transport", 00:20:23.800 "params": { 00:20:23.800 "trtype": "TCP", 00:20:23.800 "max_queue_depth": 128, 00:20:23.800 "max_io_qpairs_per_ctrlr": 127, 00:20:23.800 "in_capsule_data_size": 4096, 00:20:23.800 "max_io_size": 131072, 00:20:23.800 "io_unit_size": 131072, 00:20:23.800 "max_aq_depth": 128, 00:20:23.800 "num_shared_buffers": 511, 00:20:23.800 "buf_cache_size": 4294967295, 00:20:23.800 "dif_insert_or_strip": false, 00:20:23.800 "zcopy": false, 00:20:23.800 "c2h_success": false, 00:20:23.800 "sock_priority": 0, 00:20:23.800 "abort_timeout_sec": 1, 00:20:23.800 "ack_timeout": 0, 00:20:23.800 "data_wr_pool_size": 0 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_create_subsystem", 00:20:23.800 "params": { 00:20:23.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.800 "allow_any_host": false, 00:20:23.800 "serial_number": "00000000000000000000", 00:20:23.800 "model_number": "SPDK bdev Controller", 00:20:23.800 "max_namespaces": 32, 00:20:23.800 "min_cntlid": 1, 00:20:23.800 "max_cntlid": 65519, 00:20:23.800 
"ana_reporting": false 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_subsystem_add_host", 00:20:23.800 "params": { 00:20:23.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.800 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.800 "psk": "key0" 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_subsystem_add_ns", 00:20:23.800 "params": { 00:20:23.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.800 "namespace": { 00:20:23.800 "nsid": 1, 00:20:23.800 "bdev_name": "malloc0", 00:20:23.800 "nguid": "9DEA51D987B4435A892AFE7C81C04991", 00:20:23.800 "uuid": "9dea51d9-87b4-435a-892a-fe7c81c04991", 00:20:23.800 "no_auto_visible": false 00:20:23.800 } 00:20:23.800 } 00:20:23.800 }, 00:20:23.800 { 00:20:23.800 "method": "nvmf_subsystem_add_listener", 00:20:23.800 "params": { 00:20:23.800 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.800 "listen_address": { 00:20:23.800 "trtype": "TCP", 00:20:23.800 "adrfam": "IPv4", 00:20:23.800 "traddr": "10.0.0.2", 00:20:23.800 "trsvcid": "4420" 00:20:23.800 }, 00:20:23.800 "secure_channel": false, 00:20:23.800 "sock_impl": "ssl" 00:20:23.800 } 00:20:23.800 } 00:20:23.800 ] 00:20:23.800 } 00:20:23.800 ] 00:20:23.800 }' 00:20:23.800 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:24.063 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:24.063 "subsystems": [ 00:20:24.063 { 00:20:24.063 "subsystem": "keyring", 00:20:24.063 "config": [ 00:20:24.063 { 00:20:24.063 "method": "keyring_file_add_key", 00:20:24.063 "params": { 00:20:24.063 "name": "key0", 00:20:24.063 "path": "/tmp/tmp.iiraHCqYml" 00:20:24.063 } 00:20:24.063 } 00:20:24.063 ] 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "subsystem": "iobuf", 00:20:24.063 "config": [ 00:20:24.063 { 00:20:24.063 "method": "iobuf_set_options", 00:20:24.063 "params": { 00:20:24.063 
"small_pool_count": 8192, 00:20:24.063 "large_pool_count": 1024, 00:20:24.063 "small_bufsize": 8192, 00:20:24.063 "large_bufsize": 135168, 00:20:24.063 "enable_numa": false 00:20:24.063 } 00:20:24.063 } 00:20:24.063 ] 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "subsystem": "sock", 00:20:24.063 "config": [ 00:20:24.063 { 00:20:24.063 "method": "sock_set_default_impl", 00:20:24.063 "params": { 00:20:24.063 "impl_name": "posix" 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "sock_impl_set_options", 00:20:24.063 "params": { 00:20:24.063 "impl_name": "ssl", 00:20:24.063 "recv_buf_size": 4096, 00:20:24.063 "send_buf_size": 4096, 00:20:24.063 "enable_recv_pipe": true, 00:20:24.063 "enable_quickack": false, 00:20:24.063 "enable_placement_id": 0, 00:20:24.063 "enable_zerocopy_send_server": true, 00:20:24.063 "enable_zerocopy_send_client": false, 00:20:24.063 "zerocopy_threshold": 0, 00:20:24.063 "tls_version": 0, 00:20:24.063 "enable_ktls": false 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "sock_impl_set_options", 00:20:24.063 "params": { 00:20:24.063 "impl_name": "posix", 00:20:24.063 "recv_buf_size": 2097152, 00:20:24.063 "send_buf_size": 2097152, 00:20:24.063 "enable_recv_pipe": true, 00:20:24.063 "enable_quickack": false, 00:20:24.063 "enable_placement_id": 0, 00:20:24.063 "enable_zerocopy_send_server": true, 00:20:24.063 "enable_zerocopy_send_client": false, 00:20:24.063 "zerocopy_threshold": 0, 00:20:24.063 "tls_version": 0, 00:20:24.063 "enable_ktls": false 00:20:24.063 } 00:20:24.063 } 00:20:24.063 ] 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "subsystem": "vmd", 00:20:24.063 "config": [] 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "subsystem": "accel", 00:20:24.063 "config": [ 00:20:24.063 { 00:20:24.063 "method": "accel_set_options", 00:20:24.063 "params": { 00:20:24.063 "small_cache_size": 128, 00:20:24.063 "large_cache_size": 16, 00:20:24.063 "task_count": 2048, 00:20:24.063 "sequence_count": 2048, 00:20:24.063 
"buf_count": 2048 00:20:24.063 } 00:20:24.063 } 00:20:24.063 ] 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "subsystem": "bdev", 00:20:24.063 "config": [ 00:20:24.063 { 00:20:24.063 "method": "bdev_set_options", 00:20:24.063 "params": { 00:20:24.063 "bdev_io_pool_size": 65535, 00:20:24.063 "bdev_io_cache_size": 256, 00:20:24.063 "bdev_auto_examine": true, 00:20:24.063 "iobuf_small_cache_size": 128, 00:20:24.063 "iobuf_large_cache_size": 16 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "bdev_raid_set_options", 00:20:24.063 "params": { 00:20:24.063 "process_window_size_kb": 1024, 00:20:24.063 "process_max_bandwidth_mb_sec": 0 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "bdev_iscsi_set_options", 00:20:24.063 "params": { 00:20:24.063 "timeout_sec": 30 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "bdev_nvme_set_options", 00:20:24.063 "params": { 00:20:24.063 "action_on_timeout": "none", 00:20:24.063 "timeout_us": 0, 00:20:24.063 "timeout_admin_us": 0, 00:20:24.063 "keep_alive_timeout_ms": 10000, 00:20:24.063 "arbitration_burst": 0, 00:20:24.063 "low_priority_weight": 0, 00:20:24.063 "medium_priority_weight": 0, 00:20:24.063 "high_priority_weight": 0, 00:20:24.063 "nvme_adminq_poll_period_us": 10000, 00:20:24.063 "nvme_ioq_poll_period_us": 0, 00:20:24.063 "io_queue_requests": 512, 00:20:24.063 "delay_cmd_submit": true, 00:20:24.063 "transport_retry_count": 4, 00:20:24.063 "bdev_retry_count": 3, 00:20:24.063 "transport_ack_timeout": 0, 00:20:24.063 "ctrlr_loss_timeout_sec": 0, 00:20:24.063 "reconnect_delay_sec": 0, 00:20:24.063 "fast_io_fail_timeout_sec": 0, 00:20:24.063 "disable_auto_failback": false, 00:20:24.063 "generate_uuids": false, 00:20:24.063 "transport_tos": 0, 00:20:24.063 "nvme_error_stat": false, 00:20:24.063 "rdma_srq_size": 0, 00:20:24.063 "io_path_stat": false, 00:20:24.063 "allow_accel_sequence": false, 00:20:24.063 "rdma_max_cq_size": 0, 00:20:24.063 "rdma_cm_event_timeout_ms": 0, 
00:20:24.063 "dhchap_digests": [ 00:20:24.063 "sha256", 00:20:24.063 "sha384", 00:20:24.063 "sha512" 00:20:24.063 ], 00:20:24.063 "dhchap_dhgroups": [ 00:20:24.063 "null", 00:20:24.063 "ffdhe2048", 00:20:24.063 "ffdhe3072", 00:20:24.063 "ffdhe4096", 00:20:24.063 "ffdhe6144", 00:20:24.063 "ffdhe8192" 00:20:24.063 ] 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "bdev_nvme_attach_controller", 00:20:24.063 "params": { 00:20:24.063 "name": "nvme0", 00:20:24.063 "trtype": "TCP", 00:20:24.063 "adrfam": "IPv4", 00:20:24.063 "traddr": "10.0.0.2", 00:20:24.063 "trsvcid": "4420", 00:20:24.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.063 "prchk_reftag": false, 00:20:24.063 "prchk_guard": false, 00:20:24.063 "ctrlr_loss_timeout_sec": 0, 00:20:24.063 "reconnect_delay_sec": 0, 00:20:24.063 "fast_io_fail_timeout_sec": 0, 00:20:24.063 "psk": "key0", 00:20:24.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.063 "hdgst": false, 00:20:24.063 "ddgst": false, 00:20:24.063 "multipath": "multipath" 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "bdev_nvme_set_hotplug", 00:20:24.063 "params": { 00:20:24.063 "period_us": 100000, 00:20:24.063 "enable": false 00:20:24.063 } 00:20:24.063 }, 00:20:24.063 { 00:20:24.063 "method": "bdev_enable_histogram", 00:20:24.063 "params": { 00:20:24.064 "name": "nvme0n1", 00:20:24.064 "enable": true 00:20:24.064 } 00:20:24.064 }, 00:20:24.064 { 00:20:24.064 "method": "bdev_wait_for_examine" 00:20:24.064 } 00:20:24.064 ] 00:20:24.064 }, 00:20:24.064 { 00:20:24.064 "subsystem": "nbd", 00:20:24.064 "config": [] 00:20:24.064 } 00:20:24.064 ] 00:20:24.064 }' 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2174734 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2174734 ']' 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2174734 00:20:24.064 13:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174734 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174734' 00:20:24.064 killing process with pid 2174734 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2174734 00:20:24.064 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.064 00:20:24.064 Latency(us) 00:20:24.064 [2024-12-06T12:28:10.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.064 [2024-12-06T12:28:10.723Z] =================================================================================================================== 00:20:24.064 [2024-12-06T12:28:10.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2174734 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2174387 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2174387 ']' 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2174387 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.064 
13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174387 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174387' 00:20:24.064 killing process with pid 2174387 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2174387 00:20:24.064 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2174387 00:20:24.325 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:24.325 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.325 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.325 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:24.325 "subsystems": [ 00:20:24.325 { 00:20:24.325 "subsystem": "keyring", 00:20:24.325 "config": [ 00:20:24.325 { 00:20:24.325 "method": "keyring_file_add_key", 00:20:24.325 "params": { 00:20:24.325 "name": "key0", 00:20:24.325 "path": "/tmp/tmp.iiraHCqYml" 00:20:24.325 } 00:20:24.325 } 00:20:24.325 ] 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "subsystem": "iobuf", 00:20:24.325 "config": [ 00:20:24.325 { 00:20:24.325 "method": "iobuf_set_options", 00:20:24.325 "params": { 00:20:24.325 "small_pool_count": 8192, 00:20:24.325 "large_pool_count": 1024, 00:20:24.325 "small_bufsize": 8192, 00:20:24.325 "large_bufsize": 135168, 00:20:24.325 "enable_numa": false 00:20:24.325 } 00:20:24.325 } 00:20:24.325 ] 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "subsystem": "sock", 00:20:24.325 "config": [ 
00:20:24.325 { 00:20:24.325 "method": "sock_set_default_impl", 00:20:24.325 "params": { 00:20:24.325 "impl_name": "posix" 00:20:24.325 } 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "method": "sock_impl_set_options", 00:20:24.325 "params": { 00:20:24.325 "impl_name": "ssl", 00:20:24.325 "recv_buf_size": 4096, 00:20:24.325 "send_buf_size": 4096, 00:20:24.325 "enable_recv_pipe": true, 00:20:24.325 "enable_quickack": false, 00:20:24.325 "enable_placement_id": 0, 00:20:24.325 "enable_zerocopy_send_server": true, 00:20:24.325 "enable_zerocopy_send_client": false, 00:20:24.325 "zerocopy_threshold": 0, 00:20:24.325 "tls_version": 0, 00:20:24.325 "enable_ktls": false 00:20:24.325 } 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "method": "sock_impl_set_options", 00:20:24.325 "params": { 00:20:24.325 "impl_name": "posix", 00:20:24.325 "recv_buf_size": 2097152, 00:20:24.325 "send_buf_size": 2097152, 00:20:24.325 "enable_recv_pipe": true, 00:20:24.325 "enable_quickack": false, 00:20:24.325 "enable_placement_id": 0, 00:20:24.325 "enable_zerocopy_send_server": true, 00:20:24.325 "enable_zerocopy_send_client": false, 00:20:24.325 "zerocopy_threshold": 0, 00:20:24.325 "tls_version": 0, 00:20:24.325 "enable_ktls": false 00:20:24.325 } 00:20:24.325 } 00:20:24.325 ] 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "subsystem": "vmd", 00:20:24.325 "config": [] 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "subsystem": "accel", 00:20:24.325 "config": [ 00:20:24.325 { 00:20:24.325 "method": "accel_set_options", 00:20:24.325 "params": { 00:20:24.325 "small_cache_size": 128, 00:20:24.325 "large_cache_size": 16, 00:20:24.325 "task_count": 2048, 00:20:24.325 "sequence_count": 2048, 00:20:24.325 "buf_count": 2048 00:20:24.325 } 00:20:24.325 } 00:20:24.325 ] 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "subsystem": "bdev", 00:20:24.325 "config": [ 00:20:24.325 { 00:20:24.325 "method": "bdev_set_options", 00:20:24.325 "params": { 00:20:24.325 "bdev_io_pool_size": 65535, 00:20:24.325 "bdev_io_cache_size": 
256, 00:20:24.325 "bdev_auto_examine": true, 00:20:24.325 "iobuf_small_cache_size": 128, 00:20:24.325 "iobuf_large_cache_size": 16 00:20:24.325 } 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "method": "bdev_raid_set_options", 00:20:24.325 "params": { 00:20:24.325 "process_window_size_kb": 1024, 00:20:24.325 "process_max_bandwidth_mb_sec": 0 00:20:24.325 } 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "method": "bdev_iscsi_set_options", 00:20:24.325 "params": { 00:20:24.325 "timeout_sec": 30 00:20:24.325 } 00:20:24.325 }, 00:20:24.325 { 00:20:24.325 "method": "bdev_nvme_set_options", 00:20:24.326 "params": { 00:20:24.326 "action_on_timeout": "none", 00:20:24.326 "timeout_us": 0, 00:20:24.326 "timeout_admin_us": 0, 00:20:24.326 "keep_alive_timeout_ms": 10000, 00:20:24.326 "arbitration_burst": 0, 00:20:24.326 "low_priority_weight": 0, 00:20:24.326 "medium_priority_weight": 0, 00:20:24.326 "high_priority_weight": 0, 00:20:24.326 "nvme_adminq_poll_period_us": 10000, 00:20:24.326 "nvme_ioq_poll_period_us": 0, 00:20:24.326 "io_queue_requests": 0, 00:20:24.326 "delay_cmd_submit": true, 00:20:24.326 "transport_retry_count": 4, 00:20:24.326 "bdev_retry_count": 3, 00:20:24.326 "transport_ack_timeout": 0, 00:20:24.326 "ctrlr_loss_timeout_sec": 0, 00:20:24.326 "reconnect_delay_sec": 0, 00:20:24.326 "fast_io_fail_timeout_sec": 0, 00:20:24.326 "disable_auto_failback": false, 00:20:24.326 "generate_uuids": false, 00:20:24.326 "transport_tos": 0, 00:20:24.326 "nvme_error_stat": false, 00:20:24.326 "rdma_srq_size": 0, 00:20:24.326 "io_path_stat": false, 00:20:24.326 "allow_accel_sequence": false, 00:20:24.326 "rdma_max_cq_size": 0, 00:20:24.326 "rdma_cm_event_timeout_ms": 0, 00:20:24.326 "dhchap_digests": [ 00:20:24.326 "sha256", 00:20:24.326 "sha384", 00:20:24.326 "sha512" 00:20:24.326 ], 00:20:24.326 "dhchap_dhgroups": [ 00:20:24.326 "null", 00:20:24.326 "ffdhe2048", 00:20:24.326 "ffdhe3072", 00:20:24.326 "ffdhe4096", 00:20:24.326 "ffdhe6144", 00:20:24.326 "ffdhe8192" 00:20:24.326 ] 
00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "bdev_nvme_set_hotplug", 00:20:24.326 "params": { 00:20:24.326 "period_us": 100000, 00:20:24.326 "enable": false 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "bdev_malloc_create", 00:20:24.326 "params": { 00:20:24.326 "name": "malloc0", 00:20:24.326 "num_blocks": 8192, 00:20:24.326 "block_size": 4096, 00:20:24.326 "physical_block_size": 4096, 00:20:24.326 "uuid": "9dea51d9-87b4-435a-892a-fe7c81c04991", 00:20:24.326 "optimal_io_boundary": 0, 00:20:24.326 "md_size": 0, 00:20:24.326 "dif_type": 0, 00:20:24.326 "dif_is_head_of_md": false, 00:20:24.326 "dif_pi_format": 0 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "bdev_wait_for_examine" 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "nbd", 00:20:24.326 "config": [] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "scheduler", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "framework_set_scheduler", 00:20:24.326 "params": { 00:20:24.326 "name": "static" 00:20:24.326 } 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "subsystem": "nvmf", 00:20:24.326 "config": [ 00:20:24.326 { 00:20:24.326 "method": "nvmf_set_config", 00:20:24.326 "params": { 00:20:24.326 "discovery_filter": "match_any", 00:20:24.326 "admin_cmd_passthru": { 00:20:24.326 "identify_ctrlr": false 00:20:24.326 }, 00:20:24.326 "dhchap_digests": [ 00:20:24.326 "sha256", 00:20:24.326 "sha384", 00:20:24.326 "sha512" 00:20:24.326 ], 00:20:24.326 "dhchap_dhgroups": [ 00:20:24.326 "null", 00:20:24.326 "ffdhe2048", 00:20:24.326 "ffdhe3072", 00:20:24.326 "ffdhe4096", 00:20:24.326 "ffdhe6144", 00:20:24.326 "ffdhe8192" 00:20:24.326 ] 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "nvmf_set_max_subsystems", 00:20:24.326 "params": { 00:20:24.326 "max_subsystems": 1024 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": 
"nvmf_set_crdt", 00:20:24.326 "params": { 00:20:24.326 "crdt1": 0, 00:20:24.326 "crdt2": 0, 00:20:24.326 "crdt3": 0 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "nvmf_create_transport", 00:20:24.326 "params": { 00:20:24.326 "trtype": "TCP", 00:20:24.326 "max_queue_depth": 128, 00:20:24.326 "max_io_qpairs_per_ctrlr": 127, 00:20:24.326 "in_capsule_data_size": 4096, 00:20:24.326 "max_io_size": 131072, 00:20:24.326 "io_unit_size": 131072, 00:20:24.326 "max_aq_depth": 128, 00:20:24.326 "num_shared_buffers": 511, 00:20:24.326 "buf_cache_size": 4294967295, 00:20:24.326 "dif_insert_or_strip": false, 00:20:24.326 "zcopy": false, 00:20:24.326 "c2h_success": false, 00:20:24.326 "sock_priority": 0, 00:20:24.326 "abort_timeout_sec": 1, 00:20:24.326 "ack_timeout": 0, 00:20:24.326 "data_wr_pool_size": 0 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "nvmf_create_subsystem", 00:20:24.326 "params": { 00:20:24.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.326 "allow_any_host": false, 00:20:24.326 "serial_number": "00000000000000000000", 00:20:24.326 "model_number": "SPDK bdev Controller", 00:20:24.326 "max_namespaces": 32, 00:20:24.326 "min_cntlid": 1, 00:20:24.326 "max_cntlid": 65519, 00:20:24.326 "ana_reporting": false 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "nvmf_subsystem_add_host", 00:20:24.326 "params": { 00:20:24.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.326 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.326 "psk": "key0" 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 00:20:24.326 "method": "nvmf_subsystem_add_ns", 00:20:24.326 "params": { 00:20:24.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.326 "namespace": { 00:20:24.326 "nsid": 1, 00:20:24.326 "bdev_name": "malloc0", 00:20:24.326 "nguid": "9DEA51D987B4435A892AFE7C81C04991", 00:20:24.326 "uuid": "9dea51d9-87b4-435a-892a-fe7c81c04991", 00:20:24.326 "no_auto_visible": false 00:20:24.326 } 00:20:24.326 } 00:20:24.326 }, 00:20:24.326 { 
00:20:24.326 "method": "nvmf_subsystem_add_listener", 00:20:24.326 "params": { 00:20:24.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.326 "listen_address": { 00:20:24.326 "trtype": "TCP", 00:20:24.326 "adrfam": "IPv4", 00:20:24.326 "traddr": "10.0.0.2", 00:20:24.326 "trsvcid": "4420" 00:20:24.326 }, 00:20:24.326 "secure_channel": false, 00:20:24.326 "sock_impl": "ssl" 00:20:24.326 } 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 } 00:20:24.326 ] 00:20:24.326 }' 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2175360 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2175360 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2175360 ']' 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.326 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.326 [2024-12-06 13:28:10.872903] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:24.326 [2024-12-06 13:28:10.872959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.326 [2024-12-06 13:28:10.963681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.587 [2024-12-06 13:28:10.992638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.587 [2024-12-06 13:28:10.992665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.587 [2024-12-06 13:28:10.992671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.587 [2024-12-06 13:28:10.992676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.587 [2024-12-06 13:28:10.992680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
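An aside on the `nvmf_subsystem_add_ns` parameters in the target config above: in this configuration the `nguid` is just the namespace `uuid` with the dashes stripped and the hex upper-cased, which is easy to verify directly from the two values the log prints (a consistency check on this log, not part of the test run):

```python
# Values copied verbatim from the "nvmf_subsystem_add_ns" params above.
uuid = "9dea51d9-87b4-435a-892a-fe7c81c04991"
nguid = "9DEA51D987B4435A892AFE7C81C04991"

# In this config the 16-byte NGUID is derived from the UUID:
# same 32 hex digits, dashes removed, upper-cased.
derived = uuid.replace("-", "").upper()
print(derived == nguid)  # → True
```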
00:20:24.587 [2024-12-06 13:28:10.993143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.587 [2024-12-06 13:28:11.187011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.587 [2024-12-06 13:28:11.219042] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.587 [2024-12-06 13:28:11.219247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2175450 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2175450 /var/tmp/bdevperf.sock 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2175450 ']' 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:25.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.159 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:25.159 "subsystems": [ 00:20:25.159 { 00:20:25.159 "subsystem": "keyring", 00:20:25.159 "config": [ 00:20:25.159 { 00:20:25.159 "method": "keyring_file_add_key", 00:20:25.159 "params": { 00:20:25.159 "name": "key0", 00:20:25.159 "path": "/tmp/tmp.iiraHCqYml" 00:20:25.159 } 00:20:25.159 } 00:20:25.159 ] 00:20:25.159 }, 00:20:25.159 { 00:20:25.159 "subsystem": "iobuf", 00:20:25.159 "config": [ 00:20:25.159 { 00:20:25.159 "method": "iobuf_set_options", 00:20:25.159 "params": { 00:20:25.159 "small_pool_count": 8192, 00:20:25.159 "large_pool_count": 1024, 00:20:25.159 "small_bufsize": 8192, 00:20:25.159 "large_bufsize": 135168, 00:20:25.159 "enable_numa": false 00:20:25.159 } 00:20:25.159 } 00:20:25.159 ] 00:20:25.159 }, 00:20:25.159 { 00:20:25.159 "subsystem": "sock", 00:20:25.159 "config": [ 00:20:25.159 { 00:20:25.159 "method": "sock_set_default_impl", 00:20:25.159 "params": { 00:20:25.159 "impl_name": "posix" 00:20:25.159 } 00:20:25.159 }, 00:20:25.159 { 00:20:25.159 "method": "sock_impl_set_options", 00:20:25.160 "params": { 00:20:25.160 "impl_name": "ssl", 00:20:25.160 "recv_buf_size": 4096, 00:20:25.160 "send_buf_size": 4096, 00:20:25.160 "enable_recv_pipe": true, 00:20:25.160 "enable_quickack": false, 00:20:25.160 "enable_placement_id": 0, 00:20:25.160 "enable_zerocopy_send_server": true, 00:20:25.160 
"enable_zerocopy_send_client": false, 00:20:25.160 "zerocopy_threshold": 0, 00:20:25.160 "tls_version": 0, 00:20:25.160 "enable_ktls": false 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "sock_impl_set_options", 00:20:25.160 "params": { 00:20:25.160 "impl_name": "posix", 00:20:25.160 "recv_buf_size": 2097152, 00:20:25.160 "send_buf_size": 2097152, 00:20:25.160 "enable_recv_pipe": true, 00:20:25.160 "enable_quickack": false, 00:20:25.160 "enable_placement_id": 0, 00:20:25.160 "enable_zerocopy_send_server": true, 00:20:25.160 "enable_zerocopy_send_client": false, 00:20:25.160 "zerocopy_threshold": 0, 00:20:25.160 "tls_version": 0, 00:20:25.160 "enable_ktls": false 00:20:25.160 } 00:20:25.160 } 00:20:25.160 ] 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "subsystem": "vmd", 00:20:25.160 "config": [] 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "subsystem": "accel", 00:20:25.160 "config": [ 00:20:25.160 { 00:20:25.160 "method": "accel_set_options", 00:20:25.160 "params": { 00:20:25.160 "small_cache_size": 128, 00:20:25.160 "large_cache_size": 16, 00:20:25.160 "task_count": 2048, 00:20:25.160 "sequence_count": 2048, 00:20:25.160 "buf_count": 2048 00:20:25.160 } 00:20:25.160 } 00:20:25.160 ] 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "subsystem": "bdev", 00:20:25.160 "config": [ 00:20:25.160 { 00:20:25.160 "method": "bdev_set_options", 00:20:25.160 "params": { 00:20:25.160 "bdev_io_pool_size": 65535, 00:20:25.160 "bdev_io_cache_size": 256, 00:20:25.160 "bdev_auto_examine": true, 00:20:25.160 "iobuf_small_cache_size": 128, 00:20:25.160 "iobuf_large_cache_size": 16 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "bdev_raid_set_options", 00:20:25.160 "params": { 00:20:25.160 "process_window_size_kb": 1024, 00:20:25.160 "process_max_bandwidth_mb_sec": 0 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "bdev_iscsi_set_options", 00:20:25.160 "params": { 00:20:25.160 "timeout_sec": 30 00:20:25.160 } 00:20:25.160 }, 
00:20:25.160 { 00:20:25.160 "method": "bdev_nvme_set_options", 00:20:25.160 "params": { 00:20:25.160 "action_on_timeout": "none", 00:20:25.160 "timeout_us": 0, 00:20:25.160 "timeout_admin_us": 0, 00:20:25.160 "keep_alive_timeout_ms": 10000, 00:20:25.160 "arbitration_burst": 0, 00:20:25.160 "low_priority_weight": 0, 00:20:25.160 "medium_priority_weight": 0, 00:20:25.160 "high_priority_weight": 0, 00:20:25.160 "nvme_adminq_poll_period_us": 10000, 00:20:25.160 "nvme_ioq_poll_period_us": 0, 00:20:25.160 "io_queue_requests": 512, 00:20:25.160 "delay_cmd_submit": true, 00:20:25.160 "transport_retry_count": 4, 00:20:25.160 "bdev_retry_count": 3, 00:20:25.160 "transport_ack_timeout": 0, 00:20:25.160 "ctrlr_loss_timeout_sec": 0, 00:20:25.160 "reconnect_delay_sec": 0, 00:20:25.160 "fast_io_fail_timeout_sec": 0, 00:20:25.160 "disable_auto_failback": false, 00:20:25.160 "generate_uuids": false, 00:20:25.160 "transport_tos": 0, 00:20:25.160 "nvme_error_stat": false, 00:20:25.160 "rdma_srq_size": 0, 00:20:25.160 "io_path_stat": false, 00:20:25.160 "allow_accel_sequence": false, 00:20:25.160 "rdma_max_cq_size": 0, 00:20:25.160 "rdma_cm_event_timeout_ms": 0, 00:20:25.160 "dhchap_digests": [ 00:20:25.160 "sha256", 00:20:25.160 "sha384", 00:20:25.160 "sha512" 00:20:25.160 ], 00:20:25.160 "dhchap_dhgroups": [ 00:20:25.160 "null", 00:20:25.160 "ffdhe2048", 00:20:25.160 "ffdhe3072", 00:20:25.160 "ffdhe4096", 00:20:25.160 "ffdhe6144", 00:20:25.160 "ffdhe8192" 00:20:25.160 ] 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "bdev_nvme_attach_controller", 00:20:25.160 "params": { 00:20:25.160 "name": "nvme0", 00:20:25.160 "trtype": "TCP", 00:20:25.160 "adrfam": "IPv4", 00:20:25.160 "traddr": "10.0.0.2", 00:20:25.160 "trsvcid": "4420", 00:20:25.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.160 "prchk_reftag": false, 00:20:25.160 "prchk_guard": false, 00:20:25.160 "ctrlr_loss_timeout_sec": 0, 00:20:25.160 "reconnect_delay_sec": 0, 00:20:25.160 
"fast_io_fail_timeout_sec": 0, 00:20:25.160 "psk": "key0", 00:20:25.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.160 "hdgst": false, 00:20:25.160 "ddgst": false, 00:20:25.160 "multipath": "multipath" 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "bdev_nvme_set_hotplug", 00:20:25.160 "params": { 00:20:25.160 "period_us": 100000, 00:20:25.160 "enable": false 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "bdev_enable_histogram", 00:20:25.160 "params": { 00:20:25.160 "name": "nvme0n1", 00:20:25.160 "enable": true 00:20:25.160 } 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "method": "bdev_wait_for_examine" 00:20:25.160 } 00:20:25.160 ] 00:20:25.160 }, 00:20:25.160 { 00:20:25.160 "subsystem": "nbd", 00:20:25.160 "config": [] 00:20:25.160 } 00:20:25.160 ] 00:20:25.160 }' 00:20:25.160 [2024-12-06 13:28:11.747507] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:20:25.160 [2024-12-06 13:28:11.747561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175450 ] 00:20:25.421 [2024-12-06 13:28:11.833337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.421 [2024-12-06 13:28:11.863084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.421 [2024-12-06 13:28:11.998883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.992 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.992 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.992 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:25.992 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:26.253 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.253 13:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.253 Running I/O for 1 seconds... 00:20:27.195 4940.00 IOPS, 19.30 MiB/s 00:20:27.195 Latency(us) 00:20:27.195 [2024-12-06T12:28:13.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.195 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.195 Verification LBA range: start 0x0 length 0x2000 00:20:27.195 nvme0n1 : 1.02 4950.11 19.34 0.00 0.00 25596.61 5952.85 24576.00 00:20:27.195 [2024-12-06T12:28:13.854Z] =================================================================================================================== 00:20:27.195 [2024-12-06T12:28:13.854Z] Total : 4950.11 19.34 0.00 0.00 25596.61 5952.85 24576.00 00:20:27.195 { 00:20:27.195 "results": [ 00:20:27.195 { 00:20:27.195 "job": "nvme0n1", 00:20:27.195 "core_mask": "0x2", 00:20:27.195 "workload": "verify", 00:20:27.195 "status": "finished", 00:20:27.195 "verify_range": { 00:20:27.195 "start": 0, 00:20:27.195 "length": 8192 00:20:27.195 }, 00:20:27.195 "queue_depth": 128, 00:20:27.195 "io_size": 4096, 00:20:27.195 "runtime": 1.023815, 00:20:27.195 "iops": 4950.113057534809, 00:20:27.195 "mibps": 19.336379130995347, 00:20:27.195 "io_failed": 0, 00:20:27.195 "io_timeout": 0, 00:20:27.195 "avg_latency_us": 25596.60552486188, 00:20:27.195 "min_latency_us": 5952.8533333333335, 00:20:27.195 "max_latency_us": 24576.0 00:20:27.195 } 00:20:27.195 ], 00:20:27.195 "core_count": 1 00:20:27.195 } 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
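The bdevperf result block above reports both IOPS and MiB/s for 4 KiB I/Os over a ~1 s run; the figures are internally consistent, as a quick recomputation from the printed JSON shows (plain arithmetic on the logged values, not part of the test itself):

```python
# Figures copied from the bdevperf "results" JSON above.
iops = 4950.113057534809   # "iops"
io_size = 4096             # "io_size", bytes per I/O
runtime = 1.023815         # "runtime", seconds

# Throughput in MiB/s: IOPS times bytes per I/O, divided by 2**20.
mibps = iops * io_size / 2**20   # ≈ 19.336, matching "mibps" in the log

# Total I/Os completed during the run (roughly 5068 at this rate).
total_ios = iops * runtime

print(round(mibps, 6), round(total_ios))
```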
00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.457 nvmf_trace.0 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2175450 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2175450 ']' 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2175450 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.457 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 2175450 00:20:27.457 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.457 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.457 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175450' 00:20:27.457 killing process with pid 2175450 00:20:27.457 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2175450 00:20:27.457 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.457 00:20:27.457 Latency(us) 00:20:27.457 [2024-12-06T12:28:14.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.457 [2024-12-06T12:28:14.116Z] =================================================================================================================== 00:20:27.457 [2024-12-06T12:28:14.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.457 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2175450 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.718 rmmod nvme_tcp 00:20:27.718 rmmod nvme_fabrics 00:20:27.718 rmmod nvme_keyring 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:27.718 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2175360 ']' 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2175360 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2175360 ']' 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2175360 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2175360 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175360' 00:20:27.719 killing process with pid 2175360 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2175360 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2175360 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.719 13:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.719 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.980 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.980 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.980 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.980 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.980 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TjylYQ9QYk /tmp/tmp.LOjW9muHfq /tmp/tmp.iiraHCqYml 00:20:29.898 00:20:29.898 real 1m27.402s 00:20:29.898 user 2m18.423s 00:20:29.898 sys 0m26.404s 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.898 ************************************ 00:20:29.898 END TEST nvmf_tls 00:20:29.898 ************************************ 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:29.898 
13:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.898 ************************************ 00:20:29.898 START TEST nvmf_fips 00:20:29.898 ************************************ 00:20:29.898 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.160 * Looking for test storage... 00:20:30.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.160 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:30.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.161 --rc genhtml_branch_coverage=1 00:20:30.161 --rc genhtml_function_coverage=1 00:20:30.161 --rc genhtml_legend=1 00:20:30.161 --rc geninfo_all_blocks=1 00:20:30.161 --rc geninfo_unexecuted_blocks=1 00:20:30.161 00:20:30.161 ' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:30.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.161 --rc genhtml_branch_coverage=1 00:20:30.161 --rc genhtml_function_coverage=1 00:20:30.161 --rc genhtml_legend=1 00:20:30.161 --rc geninfo_all_blocks=1 00:20:30.161 --rc geninfo_unexecuted_blocks=1 00:20:30.161 00:20:30.161 ' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:30.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.161 --rc genhtml_branch_coverage=1 00:20:30.161 --rc genhtml_function_coverage=1 00:20:30.161 --rc genhtml_legend=1 00:20:30.161 --rc geninfo_all_blocks=1 00:20:30.161 --rc geninfo_unexecuted_blocks=1 00:20:30.161 00:20:30.161 ' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:30.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.161 --rc genhtml_branch_coverage=1 00:20:30.161 --rc genhtml_function_coverage=1 00:20:30.161 --rc genhtml_legend=1 00:20:30.161 --rc geninfo_all_blocks=1 00:20:30.161 --rc geninfo_unexecuted_blocks=1 00:20:30.161 00:20:30.161 ' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:30.161 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:30.162 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:30.423 Error setting digest 00:20:30.423 4002E8DFF47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:30.423 4002E8DFF47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:30.423 13:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.423 13:28:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.571 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:38.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:38.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:38.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:38.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.572 13:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:20:38.572 00:20:38.572 --- 10.0.0.2 ping statistics --- 00:20:38.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.572 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:20:38.572 00:20:38.572 --- 10.0.0.1 ping statistics --- 00:20:38.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.572 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.572 13:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2180180 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2180180 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2180180 ']' 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.572 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.572 [2024-12-06 13:28:24.566463] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:38.572 [2024-12-06 13:28:24.566537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.572 [2024-12-06 13:28:24.665288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.572 [2024-12-06 13:28:24.715559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.572 [2024-12-06 13:28:24.715607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.572 [2024-12-06 13:28:24.715616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.572 [2024-12-06 13:28:24.715624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.572 [2024-12-06 13:28:24.715630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.572 [2024-12-06 13:28:24.716413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.M3d 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:38.832 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.M3d 00:20:38.833 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.M3d 00:20:38.833 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.M3d 00:20:38.833 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:39.106 [2024-12-06 13:28:25.576045] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.106 [2024-12-06 13:28:25.592045] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.106 [2024-12-06 13:28:25.592357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.106 malloc0 00:20:39.106 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.106 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2180519 00:20:39.106 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2180519 /var/tmp/bdevperf.sock 00:20:39.106 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.107 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2180519 ']' 00:20:39.107 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.107 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.107 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.107 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.107 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:39.107 [2024-12-06 13:28:25.735010] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:20:39.107 [2024-12-06 13:28:25.735078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180519 ] 00:20:39.366 [2024-12-06 13:28:25.827848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.366 [2024-12-06 13:28:25.878434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.937 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.937 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:39.937 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.M3d 00:20:40.198 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.459 [2024-12-06 13:28:26.881037] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.459 TLSTESTn1 00:20:40.459 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:40.459 Running I/O for 10 seconds... 
00:20:42.782 4278.00 IOPS, 16.71 MiB/s [2024-12-06T12:28:30.379Z] 4418.50 IOPS, 17.26 MiB/s [2024-12-06T12:28:31.340Z] 4515.67 IOPS, 17.64 MiB/s [2024-12-06T12:28:32.278Z] 4801.00 IOPS, 18.75 MiB/s [2024-12-06T12:28:33.217Z] 5106.20 IOPS, 19.95 MiB/s [2024-12-06T12:28:34.155Z] 5154.17 IOPS, 20.13 MiB/s [2024-12-06T12:28:35.537Z] 5145.43 IOPS, 20.10 MiB/s [2024-12-06T12:28:36.106Z] 5117.50 IOPS, 19.99 MiB/s [2024-12-06T12:28:37.490Z] 5208.78 IOPS, 20.35 MiB/s [2024-12-06T12:28:37.490Z] 5186.20 IOPS, 20.26 MiB/s 00:20:50.831 Latency(us) 00:20:50.831 [2024-12-06T12:28:37.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.831 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.831 Verification LBA range: start 0x0 length 0x2000 00:20:50.831 TLSTESTn1 : 10.03 5184.95 20.25 0.00 0.00 24640.14 6553.60 40850.77 00:20:50.831 [2024-12-06T12:28:37.490Z] =================================================================================================================== 00:20:50.831 [2024-12-06T12:28:37.490Z] Total : 5184.95 20.25 0.00 0.00 24640.14 6553.60 40850.77 00:20:50.831 { 00:20:50.831 "results": [ 00:20:50.831 { 00:20:50.831 "job": "TLSTESTn1", 00:20:50.831 "core_mask": "0x4", 00:20:50.831 "workload": "verify", 00:20:50.831 "status": "finished", 00:20:50.831 "verify_range": { 00:20:50.831 "start": 0, 00:20:50.831 "length": 8192 00:20:50.831 }, 00:20:50.831 "queue_depth": 128, 00:20:50.831 "io_size": 4096, 00:20:50.831 "runtime": 10.027104, 00:20:50.831 "iops": 5184.946720409003, 00:20:50.831 "mibps": 20.25369812659767, 00:20:50.831 "io_failed": 0, 00:20:50.831 "io_timeout": 0, 00:20:50.831 "avg_latency_us": 24640.140465987046, 00:20:50.831 "min_latency_us": 6553.6, 00:20:50.831 "max_latency_us": 40850.77333333333 00:20:50.831 } 00:20:50.831 ], 00:20:50.831 "core_count": 1 00:20:50.831 } 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:50.831 13:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:50.831 nvmf_trace.0 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2180519 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2180519 ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2180519 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180519 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180519' 00:20:50.831 killing process with pid 2180519 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2180519 00:20:50.831 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.831 00:20:50.831 Latency(us) 00:20:50.831 [2024-12-06T12:28:37.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.831 [2024-12-06T12:28:37.490Z] =================================================================================================================== 00:20:50.831 [2024-12-06T12:28:37.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2180519 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.831 rmmod nvme_tcp 00:20:50.831 rmmod nvme_fabrics 00:20:50.831 rmmod nvme_keyring 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.831 13:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2180180 ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2180180 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2180180 ']' 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2180180 00:20:50.831 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180180 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180180' 00:20:51.092 killing process with pid 2180180 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2180180 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2180180 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.092 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.M3d 00:20:53.636 00:20:53.636 real 0m23.214s 00:20:53.636 user 0m24.721s 00:20:53.636 sys 0m9.827s 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.636 ************************************ 00:20:53.636 END TEST nvmf_fips 00:20:53.636 ************************************ 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.636 ************************************ 00:20:53.636 START TEST nvmf_control_msg_list 00:20:53.636 ************************************ 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:53.636 * Looking for test storage... 00:20:53.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:53.636 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:53.636 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:53.636 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.636 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:53.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.637 --rc genhtml_branch_coverage=1 00:20:53.637 --rc genhtml_function_coverage=1 00:20:53.637 --rc genhtml_legend=1 00:20:53.637 --rc geninfo_all_blocks=1 00:20:53.637 --rc geninfo_unexecuted_blocks=1 00:20:53.637 00:20:53.637 ' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:53.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.637 --rc genhtml_branch_coverage=1 00:20:53.637 --rc genhtml_function_coverage=1 00:20:53.637 --rc genhtml_legend=1 00:20:53.637 --rc geninfo_all_blocks=1 00:20:53.637 --rc geninfo_unexecuted_blocks=1 00:20:53.637 00:20:53.637 ' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:53.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.637 --rc genhtml_branch_coverage=1 00:20:53.637 --rc genhtml_function_coverage=1 00:20:53.637 --rc genhtml_legend=1 00:20:53.637 --rc geninfo_all_blocks=1 00:20:53.637 --rc geninfo_unexecuted_blocks=1 00:20:53.637 00:20:53.637 ' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:53.637 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.637 --rc genhtml_branch_coverage=1 00:20:53.637 --rc genhtml_function_coverage=1 00:20:53.637 --rc genhtml_legend=1 00:20:53.637 --rc geninfo_all_blocks=1 00:20:53.637 --rc geninfo_unexecuted_blocks=1 00:20:53.637 00:20:53.637 ' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.637 13:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.637 13:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.637 13:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.637 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.638 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:01.781 13:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:01.781 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:01.781 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.781 13:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:01.781 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.781 13:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:01.781 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.781 13:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:21:01.781 00:21:01.781 --- 10.0.0.2 ping statistics --- 00:21:01.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.781 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:21:01.781 00:21:01.781 --- 10.0.0.1 ping statistics --- 00:21:01.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.781 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2186874 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2186874 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2186874 ']' 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
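For reference, the nvmf_tcp_init phase traced above reduces to the following namespace plumbing. This is a hedged sketch reconstructed from the trace, not the nvmf/common.sh source; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply the values this run picked. The `run` wrapper only prints each command (executing for real needs root and the actual NICs):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace above.
# run() only echoes the plan; swap its body for "$@" to really execute.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # NIC moved into the target namespace
INIT_IF=cvl_0_1          # NIC left in the host namespace (initiator side)
NS=cvl_0_0_ns_spdk       # network namespace the nvmf target runs in

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # host -> target namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> host
```

The target application is then launched with `ip netns exec cvl_0_0_ns_spdk` prepended (the NVMF_TARGET_NS_CMD array in the trace), so it listens on 10.0.0.2:4420 inside the namespace while the initiator connects from the host side.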
00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.781 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:01.781 [2024-12-06 13:28:47.636903] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:21:01.781 [2024-12-06 13:28:47.636973] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.781 [2024-12-06 13:28:47.735785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.781 [2024-12-06 13:28:47.787325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.781 [2024-12-06 13:28:47.787378] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.781 [2024-12-06 13:28:47.787387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.781 [2024-12-06 13:28:47.787395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.781 [2024-12-06 13:28:47.787402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.781 [2024-12-06 13:28:47.788207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.043 [2024-12-06 13:28:48.499485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.043 Malloc0 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.043 [2024-12-06 13:28:48.553857] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2187213 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2187215 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2187216 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2187213 00:21:02.043 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.043 [2024-12-06 13:28:48.644400] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
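The control_msg_list.sh steps traced above amount to this RPC sequence followed by three concurrent perf jobs. Again a hedged sketch reconstructed from the trace: the `rpc` wrapper just echoes what would be sent (the real test drives scripts/rpc.py and spdk_nvme_perf against the live target), and the long workspace paths are abbreviated away:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the control_msg_list test sequence from the trace above.
# rpc() echoes instead of calling scripts/rpc.py against a live target.
rpc() { echo "rpc.py $*"; }

SUBNQN=nqn.2024-07.io.spdk:cnode0

# Transport with in-capsule data and a single control message list entry,
# which is the resource the test deliberately starves:
rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc nvmf_create_subsystem "$SUBNQN" -a
rpc bdev_malloc_create -b Malloc0 32 512
rpc nvmf_subsystem_add_ns "$SUBNQN" Malloc0
rpc nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

# Three 1-deep randread perf jobs on separate cores (masks 0x2/0x4/0x8),
# run in parallel so they contend for the single control message:
for mask in 0x2 0x4 0x8; do
  echo "spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1" \
       "-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &"
done
```

The per-core latency spread in the results that follow (sub-millisecond on two cores, ~41 ms average on the third) is consistent with the jobs queuing behind that one control message entry.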
00:21:02.043 [2024-12-06 13:28:48.654663] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:02.043 [2024-12-06 13:28:48.654968] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:03.428 Initializing NVMe Controllers 00:21:03.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:03.429 Initialization complete. Launching workers. 00:21:03.429 ======================================================== 00:21:03.429 Latency(us) 00:21:03.429 Device Information : IOPS MiB/s Average min max 00:21:03.429 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4064.00 15.87 245.77 151.30 567.63 00:21:03.429 ======================================================== 00:21:03.429 Total : 4064.00 15.87 245.77 151.30 567.63 00:21:03.429 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2187215 00:21:03.429 Initializing NVMe Controllers 00:21:03.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:03.429 Initialization complete. Launching workers. 
00:21:03.429 ======================================================== 00:21:03.429 Latency(us) 00:21:03.429 Device Information : IOPS MiB/s Average min max 00:21:03.429 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1488.00 5.81 672.04 310.92 921.52 00:21:03.429 ======================================================== 00:21:03.429 Total : 1488.00 5.81 672.04 310.92 921.52 00:21:03.429 00:21:03.429 Initializing NVMe Controllers 00:21:03.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:03.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:03.429 Initialization complete. Launching workers. 00:21:03.429 ======================================================== 00:21:03.429 Latency(us) 00:21:03.429 Device Information : IOPS MiB/s Average min max 00:21:03.429 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40918.80 40821.19 41242.30 00:21:03.429 ======================================================== 00:21:03.429 Total : 25.00 0.10 40918.80 40821.19 41242.30 00:21:03.429 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2187216 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:03.429 13:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.429 rmmod nvme_tcp 00:21:03.429 rmmod nvme_fabrics 00:21:03.429 rmmod nvme_keyring 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2186874 ']' 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2186874 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2186874 ']' 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2186874 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.429 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2186874 00:21:03.429 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.429 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.429 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2186874' 00:21:03.429 killing process with pid 2186874 00:21:03.429 
13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2186874 00:21:03.429 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2186874 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.694 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:06.240 00:21:06.240 real 0m12.493s 00:21:06.240 user 0m7.962s 00:21:06.240 sys 0m6.730s 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:06.240 ************************************ 00:21:06.240 END TEST nvmf_control_msg_list 00:21:06.240 ************************************ 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:06.240 ************************************ 00:21:06.240 START TEST nvmf_wait_for_buf 00:21:06.240 ************************************ 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:06.240 * Looking for test storage... 
00:21:06.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:06.240 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:21:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.241 --rc genhtml_branch_coverage=1 00:21:06.241 --rc genhtml_function_coverage=1 00:21:06.241 --rc genhtml_legend=1 00:21:06.241 --rc geninfo_all_blocks=1 00:21:06.241 --rc geninfo_unexecuted_blocks=1 00:21:06.241 00:21:06.241 ' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.241 --rc genhtml_branch_coverage=1 00:21:06.241 --rc genhtml_function_coverage=1 00:21:06.241 --rc genhtml_legend=1 00:21:06.241 --rc geninfo_all_blocks=1 00:21:06.241 --rc geninfo_unexecuted_blocks=1 00:21:06.241 00:21:06.241 ' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.241 --rc genhtml_branch_coverage=1 00:21:06.241 --rc genhtml_function_coverage=1 00:21:06.241 --rc genhtml_legend=1 00:21:06.241 --rc geninfo_all_blocks=1 00:21:06.241 --rc geninfo_unexecuted_blocks=1 00:21:06.241 00:21:06.241 ' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.241 --rc genhtml_branch_coverage=1 00:21:06.241 --rc genhtml_function_coverage=1 00:21:06.241 --rc genhtml_legend=1 00:21:06.241 --rc geninfo_all_blocks=1 00:21:06.241 --rc geninfo_unexecuted_blocks=1 00:21:06.241 00:21:06.241 ' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.241 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:14.377 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:14.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:14.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:14.378 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.378 13:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:14.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:14.378 13:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.378 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.378 13:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:14.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:21:14.378 00:21:14.378 --- 10.0.0.2 ping statistics --- 00:21:14.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.378 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:21:14.378 00:21:14.378 --- 10.0.0.1 ping statistics --- 00:21:14.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.378 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.378 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2191560 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2191560 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2191560 ']' 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.379 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.379 [2024-12-06 13:29:00.249603] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:21:14.379 [2024-12-06 13:29:00.249675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.379 [2024-12-06 13:29:00.351831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.379 [2024-12-06 13:29:00.404076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.379 [2024-12-06 13:29:00.404127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:14.379 [2024-12-06 13:29:00.404137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.379 [2024-12-06 13:29:00.404145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.379 [2024-12-06 13:29:00.404152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.379 [2024-12-06 13:29:00.404909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.641 
13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.641 Malloc0 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:14.641 [2024-12-06 13:29:01.247484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.641 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.642 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.642 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:14.642 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.642 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:14.642 [2024-12-06 13:29:01.283805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.642 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:14.642 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.901 [2024-12-06 13:29:01.394992] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:16.291 Initializing NVMe Controllers 00:21:16.291 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:16.291 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:16.291 Initialization complete. Launching workers. 00:21:16.291 ======================================================== 00:21:16.291 Latency(us) 00:21:16.291 Device Information : IOPS MiB/s Average min max 00:21:16.291 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.98 8005.62 66834.90 00:21:16.291 ======================================================== 00:21:16.291 Total : 129.00 16.12 32294.98 8005.62 66834.90 00:21:16.291 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.291 13:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.291 rmmod nvme_tcp 00:21:16.291 rmmod nvme_fabrics 00:21:16.291 rmmod nvme_keyring 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2191560 ']' 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2191560 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2191560 ']' 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2191560 
00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.291 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191560 00:21:16.553 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.553 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.553 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191560' 00:21:16.553 killing process with pid 2191560 00:21:16.553 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2191560 00:21:16.553 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2191560 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.553 13:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.553 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.100 00:21:19.100 real 0m12.803s 00:21:19.100 user 0m5.182s 00:21:19.100 sys 0m6.219s 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.100 ************************************ 00:21:19.100 END TEST nvmf_wait_for_buf 00:21:19.100 ************************************ 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.100 13:29:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.831 
13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:25.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.831 13:29:12 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:25.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.831 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:25.832 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:25.832 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:25.832 ************************************ 00:21:25.832 START TEST nvmf_perf_adq 00:21:25.832 ************************************ 00:21:25.832 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:26.094 * Looking for test storage... 00:21:26.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:26.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.094 --rc genhtml_branch_coverage=1 00:21:26.094 --rc genhtml_function_coverage=1 00:21:26.094 --rc genhtml_legend=1 00:21:26.094 --rc geninfo_all_blocks=1 00:21:26.094 --rc geninfo_unexecuted_blocks=1 00:21:26.094 00:21:26.094 ' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:26.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.094 --rc genhtml_branch_coverage=1 00:21:26.094 --rc genhtml_function_coverage=1 00:21:26.094 --rc genhtml_legend=1 00:21:26.094 --rc geninfo_all_blocks=1 00:21:26.094 --rc geninfo_unexecuted_blocks=1 00:21:26.094 00:21:26.094 ' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:26.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.094 --rc genhtml_branch_coverage=1 00:21:26.094 --rc genhtml_function_coverage=1 00:21:26.094 --rc genhtml_legend=1 00:21:26.094 --rc geninfo_all_blocks=1 00:21:26.094 --rc geninfo_unexecuted_blocks=1 00:21:26.094 00:21:26.094 ' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:26.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.094 --rc genhtml_branch_coverage=1 00:21:26.094 --rc genhtml_function_coverage=1 00:21:26.094 --rc genhtml_legend=1 00:21:26.094 --rc geninfo_all_blocks=1 00:21:26.094 --rc geninfo_unexecuted_blocks=1 00:21:26.094 00:21:26.094 ' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.094 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.095 13:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.095 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.252 13:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:34.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:34.252 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:34.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:34.252 13:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.252 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:34.253 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:34.253 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:34.825 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:36.745 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:42.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:42.035 13:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:42.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:42.035 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:42.035 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.035 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:42.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:21:42.036 00:21:42.036 --- 10.0.0.2 ping statistics --- 00:21:42.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.036 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:21:42.036 00:21:42.036 --- 10.0.0.1 ping statistics --- 00:21:42.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.036 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2201803 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2201803 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2201803 ']' 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.036 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 [2024-12-06 13:29:28.732682] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:21:42.297 [2024-12-06 13:29:28.732748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.297 [2024-12-06 13:29:28.830724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.297 [2024-12-06 13:29:28.886553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.297 [2024-12-06 13:29:28.886603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.297 [2024-12-06 13:29:28.886616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.297 [2024-12-06 13:29:28.886624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.297 [2024-12-06 13:29:28.886630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.297 [2024-12-06 13:29:28.888602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.297 [2024-12-06 13:29:28.888778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.297 [2024-12-06 13:29:28.888974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.297 [2024-12-06 13:29:28.888974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:43.240 13:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.240 [2024-12-06 13:29:29.766846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.240 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 Malloc1 00:21:43.241 13:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 [2024-12-06 13:29:29.849418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2202125 00:21:43.241 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:43.241 13:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:45.789 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:45.789 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.789 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:45.789 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.789 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:45.789 "tick_rate": 2400000000, 00:21:45.789 "poll_groups": [ 00:21:45.789 { 00:21:45.789 "name": "nvmf_tgt_poll_group_000", 00:21:45.789 "admin_qpairs": 1, 00:21:45.789 "io_qpairs": 1, 00:21:45.789 "current_admin_qpairs": 1, 00:21:45.789 "current_io_qpairs": 1, 00:21:45.789 "pending_bdev_io": 0, 00:21:45.789 "completed_nvme_io": 16436, 00:21:45.789 "transports": [ 00:21:45.789 { 00:21:45.789 "trtype": "TCP" 00:21:45.789 } 00:21:45.789 ] 00:21:45.789 }, 00:21:45.789 { 00:21:45.789 "name": "nvmf_tgt_poll_group_001", 00:21:45.789 "admin_qpairs": 0, 00:21:45.789 "io_qpairs": 1, 00:21:45.789 "current_admin_qpairs": 0, 00:21:45.789 "current_io_qpairs": 1, 00:21:45.789 "pending_bdev_io": 0, 00:21:45.789 "completed_nvme_io": 17765, 00:21:45.789 "transports": [ 00:21:45.789 { 00:21:45.789 "trtype": "TCP" 00:21:45.789 } 00:21:45.789 ] 00:21:45.789 }, 00:21:45.789 { 00:21:45.789 "name": "nvmf_tgt_poll_group_002", 00:21:45.789 "admin_qpairs": 0, 00:21:45.789 "io_qpairs": 1, 00:21:45.790 "current_admin_qpairs": 0, 00:21:45.790 "current_io_qpairs": 1, 00:21:45.790 "pending_bdev_io": 0, 00:21:45.790 "completed_nvme_io": 19084, 00:21:45.790 
"transports": [ 00:21:45.790 { 00:21:45.790 "trtype": "TCP" 00:21:45.790 } 00:21:45.790 ] 00:21:45.790 }, 00:21:45.790 { 00:21:45.790 "name": "nvmf_tgt_poll_group_003", 00:21:45.790 "admin_qpairs": 0, 00:21:45.790 "io_qpairs": 1, 00:21:45.790 "current_admin_qpairs": 0, 00:21:45.790 "current_io_qpairs": 1, 00:21:45.790 "pending_bdev_io": 0, 00:21:45.790 "completed_nvme_io": 16664, 00:21:45.790 "transports": [ 00:21:45.790 { 00:21:45.790 "trtype": "TCP" 00:21:45.790 } 00:21:45.790 ] 00:21:45.790 } 00:21:45.790 ] 00:21:45.790 }' 00:21:45.790 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:45.790 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:45.790 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:45.790 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:45.790 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2202125 00:21:53.926 Initializing NVMe Controllers 00:21:53.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:53.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:53.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:53.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:53.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:53.926 Initialization complete. Launching workers. 
00:21:53.926 ======================================================== 00:21:53.926 Latency(us) 00:21:53.926 Device Information : IOPS MiB/s Average min max 00:21:53.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12491.30 48.79 5124.84 1279.48 11273.14 00:21:53.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13598.00 53.12 4707.05 1322.53 12720.65 00:21:53.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13734.60 53.65 4658.93 1298.74 12659.30 00:21:53.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12971.10 50.67 4934.62 1255.47 12517.59 00:21:53.926 ======================================================== 00:21:53.926 Total : 52794.99 206.23 4849.29 1255.47 12720.65 00:21:53.926 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.926 rmmod nvme_tcp 00:21:53.926 rmmod nvme_fabrics 00:21:53.926 rmmod nvme_keyring 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:53.926 13:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2201803 ']' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2201803 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2201803 ']' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2201803 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201803 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201803' 00:21:53.926 killing process with pid 2201803 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2201803 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2201803 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:53.926 
13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.926 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.839 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.839 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:55.839 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:55.839 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:57.754 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:59.136 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.424 13:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.424 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:04.425 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:04.425 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:04.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:04.425 13:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:04.425 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:04.425 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.425 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:22:04.685 00:22:04.685 --- 10.0.0.2 ping statistics --- 00:22:04.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.685 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:22:04.685 00:22:04.685 --- 10.0.0.1 ping statistics --- 00:22:04.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.685 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:04.685 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:04.686 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:04.686 net.core.busy_poll = 1 00:22:04.686 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:04.686 net.core.busy_read = 1 00:22:04.686 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:04.686 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:04.686 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:04.686 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2206624 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2206624 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2206624 ']' 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.946 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.946 [2024-12-06 13:29:51.483864] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:22:04.946 [2024-12-06 13:29:51.483936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.946 [2024-12-06 13:29:51.585373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.207 [2024-12-06 13:29:51.638171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.207 [2024-12-06 13:29:51.638224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.207 [2024-12-06 13:29:51.638234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.207 [2024-12-06 13:29:51.638241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:05.207 [2024-12-06 13:29:51.638248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.208 [2024-12-06 13:29:51.640364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.208 [2024-12-06 13:29:51.640526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.208 [2024-12-06 13:29:51.640617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.208 [2024-12-06 13:29:51.640618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.778 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.779 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.779 [2024-12-06 13:29:52.427736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 13:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 Malloc1 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.039 [2024-12-06 13:29:52.499922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2206906 
00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:06.039 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:07.949 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:07.949 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.949 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.949 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.949 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:07.949 "tick_rate": 2400000000, 00:22:07.949 "poll_groups": [ 00:22:07.949 { 00:22:07.949 "name": "nvmf_tgt_poll_group_000", 00:22:07.949 "admin_qpairs": 1, 00:22:07.949 "io_qpairs": 1, 00:22:07.949 "current_admin_qpairs": 1, 00:22:07.949 "current_io_qpairs": 1, 00:22:07.949 "pending_bdev_io": 0, 00:22:07.949 "completed_nvme_io": 31788, 00:22:07.949 "transports": [ 00:22:07.949 { 00:22:07.949 "trtype": "TCP" 00:22:07.949 } 00:22:07.949 ] 00:22:07.949 }, 00:22:07.950 { 00:22:07.950 "name": "nvmf_tgt_poll_group_001", 00:22:07.950 "admin_qpairs": 0, 00:22:07.950 "io_qpairs": 3, 00:22:07.950 "current_admin_qpairs": 0, 00:22:07.950 "current_io_qpairs": 3, 00:22:07.950 "pending_bdev_io": 0, 00:22:07.950 "completed_nvme_io": 31249, 00:22:07.950 "transports": [ 00:22:07.950 { 00:22:07.950 "trtype": "TCP" 00:22:07.950 } 00:22:07.950 ] 00:22:07.950 }, 00:22:07.950 { 00:22:07.950 "name": "nvmf_tgt_poll_group_002", 00:22:07.950 "admin_qpairs": 0, 00:22:07.950 "io_qpairs": 0, 00:22:07.950 "current_admin_qpairs": 0, 
00:22:07.950 "current_io_qpairs": 0, 00:22:07.950 "pending_bdev_io": 0, 00:22:07.950 "completed_nvme_io": 0, 00:22:07.950 "transports": [ 00:22:07.950 { 00:22:07.950 "trtype": "TCP" 00:22:07.950 } 00:22:07.950 ] 00:22:07.950 }, 00:22:07.950 { 00:22:07.950 "name": "nvmf_tgt_poll_group_003", 00:22:07.950 "admin_qpairs": 0, 00:22:07.950 "io_qpairs": 0, 00:22:07.950 "current_admin_qpairs": 0, 00:22:07.950 "current_io_qpairs": 0, 00:22:07.950 "pending_bdev_io": 0, 00:22:07.950 "completed_nvme_io": 0, 00:22:07.950 "transports": [ 00:22:07.950 { 00:22:07.950 "trtype": "TCP" 00:22:07.950 } 00:22:07.950 ] 00:22:07.950 } 00:22:07.950 ] 00:22:07.950 }' 00:22:07.950 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:07.950 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:07.950 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:07.950 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:07.950 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2206906 00:22:16.078 Initializing NVMe Controllers 00:22:16.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:16.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:16.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:16.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:16.078 Initialization complete. Launching workers. 
00:22:16.078 ======================================================== 00:22:16.078 Latency(us) 00:22:16.078 Device Information : IOPS MiB/s Average min max 00:22:16.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 17630.00 68.87 3630.09 984.47 44701.72 00:22:16.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6845.40 26.74 9349.15 1222.52 55591.72 00:22:16.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6664.30 26.03 9623.38 1312.26 55653.89 00:22:16.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6749.70 26.37 9481.85 1081.29 57481.27 00:22:16.078 ======================================================== 00:22:16.078 Total : 37889.39 148.01 6759.93 984.47 57481.27 00:22:16.078 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.078 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.078 rmmod nvme_tcp 00:22:16.078 rmmod nvme_fabrics 00:22:16.339 rmmod nvme_keyring 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:16.339 13:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2206624 ']' 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2206624 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2206624 ']' 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2206624 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2206624 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2206624' 00:22:16.339 killing process with pid 2206624 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2206624 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2206624 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.339 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:19.636 00:22:19.636 real 0m53.630s 00:22:19.636 user 2m49.784s 00:22:19.636 sys 0m11.391s 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.636 ************************************ 00:22:19.636 END TEST nvmf_perf_adq 00:22:19.636 ************************************ 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:22:19.636 ************************************ 00:22:19.636 START TEST nvmf_shutdown 00:22:19.636 ************************************ 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:19.636 * Looking for test storage... 00:22:19.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.636 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.897 13:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.897 --rc genhtml_branch_coverage=1 00:22:19.897 --rc genhtml_function_coverage=1 00:22:19.897 --rc genhtml_legend=1 00:22:19.897 --rc geninfo_all_blocks=1 00:22:19.897 --rc geninfo_unexecuted_blocks=1 00:22:19.897 00:22:19.897 ' 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.897 --rc genhtml_branch_coverage=1 00:22:19.897 --rc genhtml_function_coverage=1 00:22:19.897 --rc genhtml_legend=1 00:22:19.897 --rc geninfo_all_blocks=1 00:22:19.897 --rc geninfo_unexecuted_blocks=1 00:22:19.897 00:22:19.897 ' 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.897 --rc genhtml_branch_coverage=1 00:22:19.897 --rc genhtml_function_coverage=1 00:22:19.897 --rc genhtml_legend=1 00:22:19.897 --rc geninfo_all_blocks=1 00:22:19.897 --rc geninfo_unexecuted_blocks=1 00:22:19.897 00:22:19.897 ' 00:22:19.897 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.897 --rc genhtml_branch_coverage=1 00:22:19.897 --rc genhtml_function_coverage=1 00:22:19.897 --rc genhtml_legend=1 00:22:19.898 --rc geninfo_all_blocks=1 00:22:19.898 --rc geninfo_unexecuted_blocks=1 00:22:19.898 00:22:19.898 ' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.898 ************************************ 00:22:19.898 START TEST nvmf_shutdown_tc1 00:22:19.898 ************************************ 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.898 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.043 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.043 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.043 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:28.044 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.044 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.044 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.044 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.044 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.044 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.044 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.045 13:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:22:28.045 00:22:28.045 --- 10.0.0.2 ping statistics --- 00:22:28.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.045 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:22:28.045 00:22:28.045 --- 10.0.0.1 ping statistics --- 00:22:28.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.045 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.045 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2214018 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2214018 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2214018 ']' 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:28.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.045 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.045 [2024-12-06 13:30:14.097221] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:22:28.045 [2024-12-06 13:30:14.097291] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.045 [2024-12-06 13:30:14.188431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.045 [2024-12-06 13:30:14.248250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.045 [2024-12-06 13:30:14.248308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.045 [2024-12-06 13:30:14.248318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.045 [2024-12-06 13:30:14.248327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.045 [2024-12-06 13:30:14.248334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.045 [2024-12-06 13:30:14.250994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.045 [2024-12-06 13:30:14.251160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.045 [2024-12-06 13:30:14.251363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.045 [2024-12-06 13:30:14.251368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.639 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.639 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:28.639 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.639 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.639 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.639 [2024-12-06 13:30:15.037066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.639 13:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.639 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.639 Malloc1 00:22:28.639 [2024-12-06 13:30:15.166420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.639 Malloc2 00:22:28.639 Malloc3 00:22:28.639 Malloc4 00:22:28.901 Malloc5 00:22:28.901 Malloc6 00:22:28.901 Malloc7 00:22:28.901 Malloc8 00:22:28.901 Malloc9 
00:22:29.163 Malloc10 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2214388 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2214388 /var/tmp/bdevperf.sock 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2214388 ']' 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.163 { 00:22:29.163 "params": { 00:22:29.163 "name": "Nvme$subsystem", 00:22:29.163 "trtype": "$TEST_TRANSPORT", 00:22:29.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.163 "adrfam": "ipv4", 00:22:29.163 "trsvcid": "$NVMF_PORT", 00:22:29.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.163 "hdgst": ${hdgst:-false}, 00:22:29.163 "ddgst": ${ddgst:-false} 00:22:29.163 }, 00:22:29.163 "method": "bdev_nvme_attach_controller" 00:22:29.163 } 00:22:29.163 EOF 00:22:29.163 )") 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.163 13:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.163 { 00:22:29.163 "params": { 00:22:29.163 "name": "Nvme$subsystem", 00:22:29.163 "trtype": "$TEST_TRANSPORT", 00:22:29.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.163 "adrfam": "ipv4", 00:22:29.163 "trsvcid": "$NVMF_PORT", 00:22:29.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.163 "hdgst": ${hdgst:-false}, 00:22:29.163 "ddgst": ${ddgst:-false} 00:22:29.163 }, 00:22:29.163 "method": "bdev_nvme_attach_controller" 00:22:29.163 } 00:22:29.163 EOF 00:22:29.163 )") 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.163 { 00:22:29.163 "params": { 00:22:29.163 "name": "Nvme$subsystem", 00:22:29.163 "trtype": "$TEST_TRANSPORT", 00:22:29.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.163 "adrfam": "ipv4", 00:22:29.163 "trsvcid": "$NVMF_PORT", 00:22:29.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.163 "hdgst": ${hdgst:-false}, 00:22:29.163 "ddgst": ${ddgst:-false} 00:22:29.163 }, 00:22:29.163 "method": "bdev_nvme_attach_controller" 00:22:29.163 } 00:22:29.163 EOF 00:22:29.163 )") 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.163 { 
00:22:29.163 "params": { 00:22:29.163 "name": "Nvme$subsystem", 00:22:29.163 "trtype": "$TEST_TRANSPORT", 00:22:29.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.163 "adrfam": "ipv4", 00:22:29.163 "trsvcid": "$NVMF_PORT", 00:22:29.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.163 "hdgst": ${hdgst:-false}, 00:22:29.163 "ddgst": ${ddgst:-false} 00:22:29.163 }, 00:22:29.163 "method": "bdev_nvme_attach_controller" 00:22:29.163 } 00:22:29.163 EOF 00:22:29.163 )") 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.163 { 00:22:29.163 "params": { 00:22:29.163 "name": "Nvme$subsystem", 00:22:29.163 "trtype": "$TEST_TRANSPORT", 00:22:29.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.163 "adrfam": "ipv4", 00:22:29.163 "trsvcid": "$NVMF_PORT", 00:22:29.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.163 "hdgst": ${hdgst:-false}, 00:22:29.163 "ddgst": ${ddgst:-false} 00:22:29.163 }, 00:22:29.163 "method": "bdev_nvme_attach_controller" 00:22:29.163 } 00:22:29.163 EOF 00:22:29.163 )") 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.163 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.163 { 00:22:29.163 "params": { 00:22:29.163 "name": "Nvme$subsystem", 00:22:29.163 "trtype": "$TEST_TRANSPORT", 00:22:29.163 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:29.163 "adrfam": "ipv4", 00:22:29.163 "trsvcid": "$NVMF_PORT", 00:22:29.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.163 "hdgst": ${hdgst:-false}, 00:22:29.163 "ddgst": ${ddgst:-false} 00:22:29.163 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 } 00:22:29.164 EOF 00:22:29.164 )") 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.164 [2024-12-06 13:30:15.687096] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:22:29.164 [2024-12-06 13:30:15.687166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.164 { 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme$subsystem", 00:22:29.164 "trtype": "$TEST_TRANSPORT", 00:22:29.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "$NVMF_PORT", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.164 "hdgst": ${hdgst:-false}, 00:22:29.164 "ddgst": ${ddgst:-false} 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 } 00:22:29.164 EOF 00:22:29.164 )") 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.164 { 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme$subsystem", 00:22:29.164 "trtype": "$TEST_TRANSPORT", 00:22:29.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "$NVMF_PORT", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.164 "hdgst": ${hdgst:-false}, 00:22:29.164 "ddgst": ${ddgst:-false} 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 } 00:22:29.164 EOF 00:22:29.164 )") 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.164 { 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme$subsystem", 00:22:29.164 "trtype": "$TEST_TRANSPORT", 00:22:29.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "$NVMF_PORT", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.164 "hdgst": ${hdgst:-false}, 00:22:29.164 "ddgst": ${ddgst:-false} 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 } 00:22:29.164 EOF 00:22:29.164 )") 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:29.164 { 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme$subsystem", 00:22:29.164 "trtype": "$TEST_TRANSPORT", 00:22:29.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "$NVMF_PORT", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.164 "hdgst": ${hdgst:-false}, 00:22:29.164 "ddgst": ${ddgst:-false} 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 } 00:22:29.164 EOF 00:22:29.164 )") 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:29.164 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme1", 00:22:29.164 "trtype": "tcp", 00:22:29.164 "traddr": "10.0.0.2", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "4420", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.164 "hdgst": false, 00:22:29.164 "ddgst": false 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 },{ 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme2", 00:22:29.164 "trtype": "tcp", 00:22:29.164 "traddr": "10.0.0.2", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "4420", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.164 "hdgst": false, 00:22:29.164 "ddgst": false 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 },{ 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme3", 00:22:29.164 "trtype": "tcp", 00:22:29.164 "traddr": 
"10.0.0.2", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "4420", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:29.164 "hdgst": false, 00:22:29.164 "ddgst": false 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 },{ 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme4", 00:22:29.164 "trtype": "tcp", 00:22:29.164 "traddr": "10.0.0.2", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "4420", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:29.164 "hdgst": false, 00:22:29.164 "ddgst": false 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 },{ 00:22:29.164 "params": { 00:22:29.164 "name": "Nvme5", 00:22:29.164 "trtype": "tcp", 00:22:29.164 "traddr": "10.0.0.2", 00:22:29.164 "adrfam": "ipv4", 00:22:29.164 "trsvcid": "4420", 00:22:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.164 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.164 "hdgst": false, 00:22:29.164 "ddgst": false 00:22:29.164 }, 00:22:29.164 "method": "bdev_nvme_attach_controller" 00:22:29.164 },{ 00:22:29.164 "params": { 00:22:29.165 "name": "Nvme6", 00:22:29.165 "trtype": "tcp", 00:22:29.165 "traddr": "10.0.0.2", 00:22:29.165 "adrfam": "ipv4", 00:22:29.165 "trsvcid": "4420", 00:22:29.165 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.165 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.165 "hdgst": false, 00:22:29.165 "ddgst": false 00:22:29.165 }, 00:22:29.165 "method": "bdev_nvme_attach_controller" 00:22:29.165 },{ 00:22:29.165 "params": { 00:22:29.165 "name": "Nvme7", 00:22:29.165 "trtype": "tcp", 00:22:29.165 "traddr": "10.0.0.2", 00:22:29.165 "adrfam": "ipv4", 00:22:29.165 "trsvcid": "4420", 00:22:29.165 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.165 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:29.165 "hdgst": false, 00:22:29.165 "ddgst": false 00:22:29.165 }, 00:22:29.165 
"method": "bdev_nvme_attach_controller" 00:22:29.165 },{ 00:22:29.165 "params": { 00:22:29.165 "name": "Nvme8", 00:22:29.165 "trtype": "tcp", 00:22:29.165 "traddr": "10.0.0.2", 00:22:29.165 "adrfam": "ipv4", 00:22:29.165 "trsvcid": "4420", 00:22:29.165 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.165 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:29.165 "hdgst": false, 00:22:29.165 "ddgst": false 00:22:29.165 }, 00:22:29.165 "method": "bdev_nvme_attach_controller" 00:22:29.165 },{ 00:22:29.165 "params": { 00:22:29.165 "name": "Nvme9", 00:22:29.165 "trtype": "tcp", 00:22:29.165 "traddr": "10.0.0.2", 00:22:29.165 "adrfam": "ipv4", 00:22:29.165 "trsvcid": "4420", 00:22:29.165 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:29.165 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:29.165 "hdgst": false, 00:22:29.165 "ddgst": false 00:22:29.165 }, 00:22:29.165 "method": "bdev_nvme_attach_controller" 00:22:29.165 },{ 00:22:29.165 "params": { 00:22:29.165 "name": "Nvme10", 00:22:29.165 "trtype": "tcp", 00:22:29.165 "traddr": "10.0.0.2", 00:22:29.165 "adrfam": "ipv4", 00:22:29.165 "trsvcid": "4420", 00:22:29.165 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:29.165 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:29.165 "hdgst": false, 00:22:29.165 "ddgst": false 00:22:29.165 }, 00:22:29.165 "method": "bdev_nvme_attach_controller" 00:22:29.165 }' 00:22:29.165 [2024-12-06 13:30:15.784673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.425 [2024-12-06 13:30:15.838465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2214388 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:30.811 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:31.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2214388 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2214018 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.753 { 00:22:31.753 "params": { 00:22:31.753 "name": "Nvme$subsystem", 00:22:31.753 "trtype": "$TEST_TRANSPORT", 00:22:31.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.753 "adrfam": "ipv4", 00:22:31.753 "trsvcid": "$NVMF_PORT", 00:22:31.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.753 "hdgst": ${hdgst:-false}, 00:22:31.753 "ddgst": ${ddgst:-false} 00:22:31.753 }, 00:22:31.753 "method": "bdev_nvme_attach_controller" 00:22:31.753 } 00:22:31.753 EOF 00:22:31.753 )") 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.753 { 00:22:31.753 "params": { 00:22:31.753 "name": "Nvme$subsystem", 00:22:31.753 "trtype": "$TEST_TRANSPORT", 00:22:31.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.753 "adrfam": "ipv4", 00:22:31.753 "trsvcid": "$NVMF_PORT", 00:22:31.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.753 "hdgst": ${hdgst:-false}, 00:22:31.753 "ddgst": ${ddgst:-false} 00:22:31.753 }, 00:22:31.753 "method": "bdev_nvme_attach_controller" 00:22:31.753 } 00:22:31.753 EOF 00:22:31.753 )") 00:22:31.753 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 
00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 
00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 [2024-12-06 13:30:18.269040] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:22:31.754 [2024-12-06 13:30:18.269096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2214787 ] 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": 
"bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:31.754 { 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme$subsystem", 00:22:31.754 "trtype": "$TEST_TRANSPORT", 00:22:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "$NVMF_PORT", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.754 "hdgst": ${hdgst:-false}, 00:22:31.754 "ddgst": ${ddgst:-false} 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 } 00:22:31.754 EOF 00:22:31.754 )") 00:22:31.754 13:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:31.754 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme1", 00:22:31.754 "trtype": "tcp", 00:22:31.754 "traddr": "10.0.0.2", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "4420", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.754 "hdgst": false, 00:22:31.754 "ddgst": false 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 },{ 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme2", 00:22:31.754 "trtype": "tcp", 00:22:31.754 "traddr": "10.0.0.2", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "4420", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.754 "hdgst": false, 00:22:31.754 "ddgst": false 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 },{ 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme3", 00:22:31.754 "trtype": "tcp", 00:22:31.754 "traddr": "10.0.0.2", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "4420", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.754 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.754 "hdgst": false, 00:22:31.754 "ddgst": false 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 },{ 00:22:31.754 "params": { 00:22:31.754 "name": "Nvme4", 00:22:31.754 "trtype": "tcp", 00:22:31.754 "traddr": "10.0.0.2", 00:22:31.754 "adrfam": "ipv4", 00:22:31.754 "trsvcid": "4420", 00:22:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.754 "hostnqn": 
"nqn.2016-06.io.spdk:host4", 00:22:31.754 "hdgst": false, 00:22:31.754 "ddgst": false 00:22:31.754 }, 00:22:31.754 "method": "bdev_nvme_attach_controller" 00:22:31.754 },{ 00:22:31.754 "params": { 00:22:31.755 "name": "Nvme5", 00:22:31.755 "trtype": "tcp", 00:22:31.755 "traddr": "10.0.0.2", 00:22:31.755 "adrfam": "ipv4", 00:22:31.755 "trsvcid": "4420", 00:22:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.755 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.755 "hdgst": false, 00:22:31.755 "ddgst": false 00:22:31.755 }, 00:22:31.755 "method": "bdev_nvme_attach_controller" 00:22:31.755 },{ 00:22:31.755 "params": { 00:22:31.755 "name": "Nvme6", 00:22:31.755 "trtype": "tcp", 00:22:31.755 "traddr": "10.0.0.2", 00:22:31.755 "adrfam": "ipv4", 00:22:31.755 "trsvcid": "4420", 00:22:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.755 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.755 "hdgst": false, 00:22:31.755 "ddgst": false 00:22:31.755 }, 00:22:31.755 "method": "bdev_nvme_attach_controller" 00:22:31.755 },{ 00:22:31.755 "params": { 00:22:31.755 "name": "Nvme7", 00:22:31.755 "trtype": "tcp", 00:22:31.755 "traddr": "10.0.0.2", 00:22:31.755 "adrfam": "ipv4", 00:22:31.755 "trsvcid": "4420", 00:22:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.755 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.755 "hdgst": false, 00:22:31.755 "ddgst": false 00:22:31.755 }, 00:22:31.755 "method": "bdev_nvme_attach_controller" 00:22:31.755 },{ 00:22:31.755 "params": { 00:22:31.755 "name": "Nvme8", 00:22:31.755 "trtype": "tcp", 00:22:31.755 "traddr": "10.0.0.2", 00:22:31.755 "adrfam": "ipv4", 00:22:31.755 "trsvcid": "4420", 00:22:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.755 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:31.755 "hdgst": false, 00:22:31.755 "ddgst": false 00:22:31.755 }, 00:22:31.755 "method": "bdev_nvme_attach_controller" 00:22:31.755 },{ 00:22:31.755 "params": { 00:22:31.755 "name": "Nvme9", 00:22:31.755 "trtype": "tcp", 00:22:31.755 
"traddr": "10.0.0.2", 00:22:31.755 "adrfam": "ipv4", 00:22:31.755 "trsvcid": "4420", 00:22:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.755 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.755 "hdgst": false, 00:22:31.755 "ddgst": false 00:22:31.755 }, 00:22:31.755 "method": "bdev_nvme_attach_controller" 00:22:31.755 },{ 00:22:31.755 "params": { 00:22:31.755 "name": "Nvme10", 00:22:31.755 "trtype": "tcp", 00:22:31.755 "traddr": "10.0.0.2", 00:22:31.755 "adrfam": "ipv4", 00:22:31.755 "trsvcid": "4420", 00:22:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.755 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.755 "hdgst": false, 00:22:31.755 "ddgst": false 00:22:31.755 }, 00:22:31.755 "method": "bdev_nvme_attach_controller" 00:22:31.755 }' 00:22:31.755 [2024-12-06 13:30:18.358855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.755 [2024-12-06 13:30:18.394480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.142 Running I/O for 1 seconds... 
00:22:34.445 1855.00 IOPS, 115.94 MiB/s 00:22:34.445 Latency(us) 00:22:34.445 [2024-12-06T12:30:21.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.445 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme1n1 : 1.10 232.93 14.56 0.00 0.00 271771.52 17694.72 251658.24 00:22:34.445 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme2n1 : 1.14 225.35 14.08 0.00 0.00 276298.03 17148.59 232434.35 00:22:34.445 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme3n1 : 1.07 240.20 15.01 0.00 0.00 254082.56 18240.85 253405.87 00:22:34.445 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme4n1 : 1.14 224.38 14.02 0.00 0.00 267912.32 16493.23 265639.25 00:22:34.445 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme5n1 : 1.13 230.01 14.38 0.00 0.00 256568.40 20753.07 267386.88 00:22:34.445 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme6n1 : 1.17 219.23 13.70 0.00 0.00 265397.33 27088.21 267386.88 00:22:34.445 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme7n1 : 1.18 272.15 17.01 0.00 0.00 209824.94 16493.23 242920.11 00:22:34.445 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme8n1 : 1.18 270.95 16.93 0.00 0.00 206508.63 13489.49 232434.35 
00:22:34.445 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme9n1 : 1.19 268.66 16.79 0.00 0.00 205533.35 8082.77 253405.87 00:22:34.445 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.445 Verification LBA range: start 0x0 length 0x400 00:22:34.445 Nvme10n1 : 1.17 217.91 13.62 0.00 0.00 248052.27 15728.64 274377.39 00:22:34.445 [2024-12-06T12:30:21.104Z] =================================================================================================================== 00:22:34.445 [2024-12-06T12:30:21.104Z] Total : 2401.78 150.11 0.00 0.00 243499.56 8082.77 274377.39 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:34.445 13:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.445 rmmod nvme_tcp 00:22:34.445 rmmod nvme_fabrics 00:22:34.445 rmmod nvme_keyring 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2214018 ']' 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2214018 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2214018 ']' 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2214018 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:34.445 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.721 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2214018 00:22:34.721 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:34.721 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:34.721 13:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2214018' 00:22:34.721 killing process with pid 2214018 00:22:34.721 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2214018 00:22:34.721 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2214018 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.981 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.895 00:22:36.895 real 0m17.036s 00:22:36.895 user 0m34.720s 00:22:36.895 sys 0m6.921s 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:36.895 ************************************ 00:22:36.895 END TEST nvmf_shutdown_tc1 00:22:36.895 ************************************ 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.895 ************************************ 00:22:36.895 START TEST nvmf_shutdown_tc2 00:22:36.895 ************************************ 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.895 13:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.895 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.157 13:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.157 13:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.157 13:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:37.157 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:37.157 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.157 13:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:37.157 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.157 13:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.157 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:37.158 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:37.158 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.419 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.419 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.419 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:37.419 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:37.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:22:37.419 00:22:37.419 --- 10.0.0.2 ping statistics --- 00:22:37.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.419 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:22:37.419 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:22:37.420 00:22:37.420 --- 10.0.0.1 ping statistics --- 00:22:37.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.420 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.420 
13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2215971 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2215971 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2215971 ']' 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.420 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:37.420 [2024-12-06 13:30:23.978824] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:22:37.420 [2024-12-06 13:30:23.978890] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.420 [2024-12-06 13:30:24.073095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.681 [2024-12-06 13:30:24.107820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.681 [2024-12-06 13:30:24.107849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.681 [2024-12-06 13:30:24.107855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.681 [2024-12-06 13:30:24.107863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.681 [2024-12-06 13:30:24.107868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.681 [2024-12-06 13:30:24.109308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.681 [2024-12-06 13:30:24.109566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.681 [2024-12-06 13:30:24.109566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:37.681 [2024-12-06 13:30:24.109350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.254 [2024-12-06 13:30:24.825245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.254 13:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.254 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.514 Malloc1 00:22:38.514 [2024-12-06 13:30:24.933224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.514 Malloc2 00:22:38.514 Malloc3 00:22:38.514 Malloc4 00:22:38.514 Malloc5 00:22:38.514 Malloc6 00:22:38.514 Malloc7 00:22:38.774 Malloc8 00:22:38.774 Malloc9 
00:22:38.774 Malloc10 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2216294 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2216294 /var/tmp/bdevperf.sock 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2216294 ']' 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.774 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.774 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 
00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 [2024-12-06 13:30:25.378416] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:22:38.775 [2024-12-06 13:30:25.378473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216294 ] 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": 
${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.775 { 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme$subsystem", 00:22:38.775 "trtype": "$TEST_TRANSPORT", 00:22:38.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "$NVMF_PORT", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.775 "hdgst": ${hdgst:-false}, 00:22:38.775 "ddgst": ${ddgst:-false} 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 } 00:22:38.775 EOF 00:22:38.775 )") 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:38.775 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme1", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme2", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme3", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme4", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 
00:22:38.775 "name": "Nvme5", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme6", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme7", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme8", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.775 "params": { 00:22:38.775 "name": "Nvme9", 00:22:38.775 "trtype": "tcp", 00:22:38.775 "traddr": "10.0.0.2", 00:22:38.775 "adrfam": "ipv4", 00:22:38.775 "trsvcid": "4420", 00:22:38.775 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:38.775 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:38.775 "hdgst": false, 00:22:38.775 "ddgst": false 00:22:38.775 }, 00:22:38.775 "method": "bdev_nvme_attach_controller" 00:22:38.775 },{ 00:22:38.776 "params": { 00:22:38.776 "name": "Nvme10", 00:22:38.776 "trtype": "tcp", 00:22:38.776 "traddr": "10.0.0.2", 00:22:38.776 "adrfam": "ipv4", 00:22:38.776 "trsvcid": "4420", 00:22:38.776 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:38.776 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:38.776 "hdgst": false, 00:22:38.776 "ddgst": false 00:22:38.776 }, 00:22:38.776 "method": "bdev_nvme_attach_controller" 00:22:38.776 }' 00:22:39.034 [2024-12-06 13:30:25.468524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.034 [2024-12-06 13:30:25.504843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.413 Running I/O for 10 seconds... 00:22:40.413 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.413 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:40.413 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:40.413 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.413 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:40.673 13:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:40.673 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:40.933 13:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:40.933 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2216294 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2216294 ']' 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2216294 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216294 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216294' 00:22:41.193 killing process with pid 2216294 00:22:41.193 13:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2216294 00:22:41.193 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2216294 00:22:41.454 Received shutdown signal, test time was about 0.997732 seconds 00:22:41.454 00:22:41.454 Latency(us) 00:22:41.454 [2024-12-06T12:30:28.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.454 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme1n1 : 0.96 266.21 16.64 0.00 0.00 237552.00 18786.99 262144.00 00:22:41.454 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme2n1 : 0.97 265.15 16.57 0.00 0.00 233746.77 14527.15 246415.36 00:22:41.454 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme3n1 : 0.96 267.39 16.71 0.00 0.00 227058.35 31457.28 246415.36 00:22:41.454 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme4n1 : 1.00 260.82 16.30 0.00 0.00 218622.32 8574.29 225443.84 00:22:41.454 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme5n1 : 0.95 268.15 16.76 0.00 0.00 216707.84 16384.00 246415.36 00:22:41.454 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme6n1 : 0.94 203.50 12.72 0.00 0.00 278969.46 20097.71 249910.61 00:22:41.454 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme7n1 : 
0.95 274.31 17.14 0.00 0.00 201980.27 2375.68 248162.99 00:22:41.454 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme8n1 : 0.97 202.29 12.64 0.00 0.00 255945.25 3263.15 248162.99 00:22:41.454 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme9n1 : 0.94 204.25 12.77 0.00 0.00 258977.85 19442.35 241172.48 00:22:41.454 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.454 Verification LBA range: start 0x0 length 0x400 00:22:41.454 Nvme10n1 : 0.95 202.14 12.63 0.00 0.00 256159.29 20753.07 262144.00 00:22:41.454 [2024-12-06T12:30:28.113Z] =================================================================================================================== 00:22:41.454 [2024-12-06T12:30:28.113Z] Total : 2414.20 150.89 0.00 0.00 235866.63 2375.68 262144.00 00:22:41.454 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:42.398 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2215971 00:22:42.398 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:42.398 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:42.398 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:42.398 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.659 rmmod nvme_tcp 00:22:42.659 rmmod nvme_fabrics 00:22:42.659 rmmod nvme_keyring 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2215971 ']' 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2215971 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2215971 ']' 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2215971 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215971 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215971' 00:22:42.659 killing process with pid 2215971 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2215971 00:22:42.659 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2215971 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.921 13:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.921 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.837 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.097 00:22:45.097 real 0m7.946s 00:22:45.097 user 0m24.096s 00:22:45.097 sys 0m1.307s 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.097 ************************************ 00:22:45.097 END TEST nvmf_shutdown_tc2 00:22:45.097 ************************************ 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.097 ************************************ 00:22:45.097 START TEST nvmf_shutdown_tc3 00:22:45.097 ************************************ 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:45.097 13:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.097 13:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:45.097 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:45.098 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:45.098 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:45.098 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.098 13:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:45.098 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.098 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:22:45.359 00:22:45.359 --- 10.0.0.2 ping statistics --- 00:22:45.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.359 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:22:45.359 00:22:45.359 --- 10.0.0.1 ping statistics --- 00:22:45.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.359 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.359 
13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2217753 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2217753 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2217753 ']' 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.359 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:45.359 [2024-12-06 13:30:31.996732] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:22:45.359 [2024-12-06 13:30:31.996797] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.619 [2024-12-06 13:30:32.091277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.619 [2024-12-06 13:30:32.130259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.619 [2024-12-06 13:30:32.130297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.619 [2024-12-06 13:30:32.130302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.619 [2024-12-06 13:30:32.130311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.619 [2024-12-06 13:30:32.130315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.619 [2024-12-06 13:30:32.132102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.619 [2024-12-06 13:30:32.132261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.619 [2024-12-06 13:30:32.132414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.619 [2024-12-06 13:30:32.132416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:46.189 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.189 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:46.189 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.189 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.189 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.450 [2024-12-06 13:30:32.853468] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.450 13:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.450 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.450 Malloc1 00:22:46.450 [2024-12-06 13:30:32.964310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.450 Malloc2 00:22:46.450 Malloc3 00:22:46.450 Malloc4 00:22:46.450 Malloc5 00:22:46.710 Malloc6 00:22:46.710 Malloc7 00:22:46.710 Malloc8 00:22:46.710 Malloc9 
00:22:46.710 Malloc10 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2218134 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2218134 /var/tmp/bdevperf.sock 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2218134 ']' 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.710 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.710 { 00:22:46.711 "params": { 00:22:46.711 "name": "Nvme$subsystem", 00:22:46.711 "trtype": "$TEST_TRANSPORT", 00:22:46.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.711 "adrfam": "ipv4", 00:22:46.711 "trsvcid": "$NVMF_PORT", 00:22:46.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.711 "hdgst": ${hdgst:-false}, 00:22:46.711 "ddgst": ${ddgst:-false} 00:22:46.711 }, 00:22:46.711 "method": "bdev_nvme_attach_controller" 00:22:46.711 } 00:22:46.711 EOF 00:22:46.711 )") 00:22:46.711 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 
00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 [2024-12-06 13:30:33.408055] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:22:46.971 [2024-12-06 13:30:33.408110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218134 ] 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:46.971 13:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:46.971 { 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme$subsystem", 00:22:46.971 "trtype": "$TEST_TRANSPORT", 00:22:46.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "$NVMF_PORT", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.971 "hdgst": ${hdgst:-false}, 00:22:46.971 "ddgst": ${ddgst:-false} 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 } 00:22:46.971 EOF 00:22:46.971 )") 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:46.971 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:46.971 "params": { 00:22:46.971 "name": "Nvme1", 00:22:46.971 "trtype": "tcp", 00:22:46.971 "traddr": "10.0.0.2", 00:22:46.971 "adrfam": "ipv4", 00:22:46.971 "trsvcid": "4420", 00:22:46.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.971 "hdgst": false, 00:22:46.971 "ddgst": false 00:22:46.971 }, 00:22:46.971 "method": "bdev_nvme_attach_controller" 00:22:46.971 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme2", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 
00:22:46.972 "params": { 00:22:46.972 "name": "Nvme3", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme4", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme5", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme6", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme7", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:46.972 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme8", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme9", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 },{ 00:22:46.972 "params": { 00:22:46.972 "name": "Nvme10", 00:22:46.972 "trtype": "tcp", 00:22:46.972 "traddr": "10.0.0.2", 00:22:46.972 "adrfam": "ipv4", 00:22:46.972 "trsvcid": "4420", 00:22:46.972 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:46.972 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:46.972 "hdgst": false, 00:22:46.972 "ddgst": false 00:22:46.972 }, 00:22:46.972 "method": "bdev_nvme_attach_controller" 00:22:46.972 }' 00:22:46.972 [2024-12-06 13:30:33.496111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.972 [2024-12-06 13:30:33.532834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.884 Running I/O for 10 seconds... 
00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:49.461 13:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:49.461 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2217753 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2217753 ']' 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2217753 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.461 13:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217753 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217753' 00:22:49.461 killing process with pid 2217753 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2217753 00:22:49.461 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2217753 00:22:49.461 [2024-12-06 13:30:36.074314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074675] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.074708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1cda0 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.075981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076053] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076112] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076174] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076231] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.462 [2024-12-06 13:30:36.076273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076293] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.076342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f970 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077658] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077720] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077782] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077838] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077896] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077952] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.077957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d290 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.078966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.078990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.078996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.079002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.079007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.079018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.079023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.079032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.463 [2024-12-06 13:30:36.079037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.464 [2024-12-06 13:30:36.079041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set 00:22:49.464 [2024-12-06 13:30:36.079046] 
00:22:49.464 [2024-12-06 13:30:36.079051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1d760 is same with the state(6) to be set
[... message repeated for tqpair=0x1b1d760 through 2024-12-06 13:30:36.079307 ...]
00:22:49.464 [2024-12-06 13:30:36.080139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1dc50 is same with the state(6) to be set
[... message repeated for tqpair=0x1b1dc50 through 2024-12-06 13:30:36.080444 ...]
00:22:49.465 [2024-12-06 13:30:36.081118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e120 is same with the state(6) to be set
[... message repeated for tqpair=0x1b1e120 through 2024-12-06 13:30:36.081502 ...]
00:22:49.466 [2024-12-06 13:30:36.082249] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:49.466 [2024-12-06 13:30:36.082615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.466 [2024-12-06 13:30:36.082640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous command/completion NOTICE pairs repeated for WRITE sqid:1 cid:31 through cid:63 (lba:20352 through lba:24448, len:128) and for READ sqid:1 cid:0 through cid:2 (lba:16384 through lba:16640, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083354] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083448] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 
13:30:36.083663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.467 [2024-12-06 13:30:36.083738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.083767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:49.467 [2024-12-06 13:30:36.084285] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.467 [2024-12-06 13:30:36.084342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.467 [2024-12-06 13:30:36.084355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.084364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.467 [2024-12-06 13:30:36.084372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.084380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.467 [2024-12-06 13:30:36.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.084396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.467 [2024-12-06 13:30:36.084406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 [2024-12-06 13:30:36.084414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86c960 is same with the state(6) to be set 00:22:49.467 [2024-12-06 13:30:36.084441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.467 [2024-12-06 13:30:36.084450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.467 
[2024-12-06 13:30:36.084466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.467 [2024-12-06 13:30:36.084474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc979e0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.084538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbea10 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.084641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 
13:30:36.084701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x869c90 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.084731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d8d0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.084818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d460 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.084918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.468 [2024-12-06 13:30:36.084975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.468 [2024-12-06 13:30:36.084983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc985b0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 
13:30:36.086439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086498] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086560] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.468 [2024-12-06 13:30:36.086564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086616] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:49.469 [2024-12-06 13:30:36.086622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc985b0 (9): Bad file descriptor 00:22:49.469 [2024-12-06 13:30:36.086651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086702] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.469 [2024-12-06 13:30:36.086707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eac0 is same with the state(6) to be set 00:22:49.469 [2024-12-06 13:30:36.086740] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.469 [2024-12-06 13:30:36.087005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087322] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-06 13:30:36.087414] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-06 13:30:36.087424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 
13:30:36.087624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-06 13:30:36.087708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-06 13:30:36.087715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.470 [2024-12-06 13:30:36.089155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1efb0 is same with the state(6) to be set
00:22:49.471 [2024-12-06 13:30:36.089932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f480 is same with the state(6) to be set
00:22:49.472 [2024-12-06 13:30:36.104040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.472 [2024-12-06 13:30:36.104074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.472 [2024-12-06 13:30:36.104087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.472 [2024-12-06 
13:30:36.104095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 
[2024-12-06 13:30:36.104410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.472 [2024-12-06 13:30:36.104510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.104521] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6e590 is same with the state(6) to be set 00:22:49.472 [2024-12-06 13:30:36.105591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.472 [2024-12-06 13:30:36.105631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc985b0 with addr=10.0.0.2, port=4420 00:22:49.472 [2024-12-06 13:30:36.105645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc985b0 is same with the state(6) to be set 00:22:49.472 [2024-12-06 13:30:36.105688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c960 (9): Bad file descriptor 00:22:49.472 [2024-12-06 13:30:36.105708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc979e0 (9): Bad file descriptor 00:22:49.472 [2024-12-06 13:30:36.105731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbea10 (9): Bad file descriptor 00:22:49.472 [2024-12-06 13:30:36.105777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.105799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.105817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105825] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.105833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.105852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc1550 is same with the state(6) to be set 00:22:49.472 [2024-12-06 13:30:36.105874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869c90 (9): Bad file descriptor 00:22:49.472 [2024-12-06 13:30:36.105892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86d8d0 (9): Bad file descriptor 00:22:49.472 [2024-12-06 13:30:36.105915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86d460 (9): Bad file descriptor 00:22:49.472 [2024-12-06 13:30:36.105950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.105970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.105987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.105995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.472 [2024-12-06 13:30:36.106003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.472 [2024-12-06 13:30:36.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.106018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8e830 is same with the state(6) to be set 00:22:49.473 [2024-12-06 13:30:36.106045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.473 [2024-12-06 13:30:36.106055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.106064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.473 [2024-12-06 13:30:36.106072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.106080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.473 [2024-12-06 13:30:36.106088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.106096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.473 [2024-12-06 13:30:36.106104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.106111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785610 is same with the state(6) to be set 00:22:49.473 [2024-12-06 13:30:36.106131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc985b0 (9): Bad file descriptor 00:22:49.473 [2024-12-06 13:30:36.107480] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.473 [2024-12-06 13:30:36.108328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:49.473 [2024-12-06 13:30:36.108577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.473 [2024-12-06 13:30:36.108773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.473 [2024-12-06 13:30:36.108783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc71d60 is same with the state(6) to be set 00:22:49.473 
[2024-12-06 13:30:36.109266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.473 [2024-12-06 13:30:36.109288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86c960 with addr=10.0.0.2, port=4420 00:22:49.473 [2024-12-06 13:30:36.109298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86c960 is same with the state(6) to be set 00:22:49.473 [2024-12-06 13:30:36.109309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:49.473 [2024-12-06 13:30:36.109318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:49.473 [2024-12-06 13:30:36.109329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:49.473 [2024-12-06 13:30:36.109340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:49.473 [2024-12-06 13:30:36.110839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:49.473 [2024-12-06 13:30:36.110874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8e830 (9): Bad file descriptor 00:22:49.473 [2024-12-06 13:30:36.110889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c960 (9): Bad file descriptor 00:22:49.741 [2024-12-06 13:30:36.110993] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.741 [2024-12-06 13:30:36.111039] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:49.741 [2024-12-06 13:30:36.111069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:49.741 [2024-12-06 13:30:36.111078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:49.741 [2024-12-06 13:30:36.111087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:49.741 [2024-12-06 13:30:36.111096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:49.741 [2024-12-06 13:30:36.111737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.741 [2024-12-06 13:30:36.111778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8e830 with addr=10.0.0.2, port=4420 00:22:49.741 [2024-12-06 13:30:36.111796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8e830 is same with the state(6) to be set 00:22:49.741 [2024-12-06 13:30:36.111879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8e830 (9): Bad file descriptor 00:22:49.741 [2024-12-06 13:30:36.111940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:49.741 [2024-12-06 13:30:36.111950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:49.741 [2024-12-06 13:30:36.111960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:49.741 [2024-12-06 13:30:36.111969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:49.741 [2024-12-06 13:30:36.115250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc1550 (9): Bad file descriptor 00:22:49.741 [2024-12-06 13:30:36.115301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x785610 (9): Bad file descriptor 00:22:49.741 [2024-12-06 13:30:36.115434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115544] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.741 [2024-12-06 13:30:36.115665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:49.741 [2024-12-06 13:30:36.115673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 53 similar NOTICE pairs elided: READ sqid:1 cid:11 through cid:63, nsid:1, lba:17792 through lba:24448, len:128, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:49.743 [2024-12-06 13:30:36.116772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5c350 is same with the state(6) to be set
00:22:49.743 [2024-12-06 13:30:36.118172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.743 [2024-12-06 13:30:36.118187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 57 similar NOTICE pairs elided: READ sqid:1 cid:1 through cid:57, nsid:1, lba:16512 through lba:23680, len:128, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:49.744 [2024-12-06 13:30:36.119241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.119249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.119260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.119267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.119277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.119285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.119294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.119302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.119313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.119321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.119330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.119338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.119347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa72360 is same with the state(6) to be set 00:22:49.744 [2024-12-06 13:30:36.120626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.120641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.120655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.120667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.120679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.744 [2024-12-06 13:30:36.120689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.744 [2024-12-06 13:30:36.120701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.120988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.120998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121059] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 
13:30:36.121266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.745 [2024-12-06 13:30:36.121308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.745 [2024-12-06 13:30:36.121318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.746 [2024-12-06 13:30:36.121572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121671] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.121797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.121807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa73450 is same with the state(6) to be set 00:22:49.746 [2024-12-06 13:30:36.123081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123265] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.746 [2024-12-06 13:30:36.123323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.746 [2024-12-06 13:30:36.123331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123369] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 
13:30:36.123583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.747 [2024-12-06 13:30:36.123889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.123987] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.123997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.124007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.124017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.124025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.124035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.124043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.747 [2024-12-06 13:30:36.124053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.747 [2024-12-06 13:30:36.124061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.124258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.124266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6f7e0 is same with the state(6) to be set 00:22:49.748 [2024-12-06 13:30:36.125556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125899] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.125979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.125992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.126000] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.126011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.126020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.126030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.126038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.126048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.126056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.126066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.748 [2024-12-06 13:30:36.126074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.748 [2024-12-06 13:30:36.126084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 
13:30:36.126208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.749 [2024-12-06 13:30:36.126527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126625] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.749 [2024-12-06 13:30:36.126735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.749 [2024-12-06 13:30:36.126743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc75310 is same with the state(6) to be set 00:22:49.749 [2024-12-06 13:30:36.128001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:49.749 [2024-12-06 13:30:36.128025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:49.749 [2024-12-06 13:30:36.128040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:49.749 [2024-12-06 13:30:36.128053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:49.749 [2024-12-06 13:30:36.128128] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:22:49.749 [2024-12-06 13:30:36.128143] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:22:49.749 [2024-12-06 13:30:36.128231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:49.749 [2024-12-06 13:30:36.128246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:49.749 [2024-12-06 13:30:36.128771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.750 [2024-12-06 13:30:36.128811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc985b0 with addr=10.0.0.2, port=4420 00:22:49.750 [2024-12-06 13:30:36.128824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc985b0 is same with the state(6) to be set 00:22:49.750 [2024-12-06 13:30:36.129166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.750 [2024-12-06 13:30:36.129179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86d8d0 with addr=10.0.0.2, port=4420 00:22:49.750 [2024-12-06 13:30:36.129188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d8d0 is same with the state(6) to be set 00:22:49.750 [2024-12-06 13:30:36.129660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.750 [2024-12-06 13:30:36.129698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86d460 with addr=10.0.0.2, port=4420 00:22:49.750 [2024-12-06 13:30:36.129711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d460 is same with the state(6) to be set 00:22:49.750 [2024-12-06 13:30:36.129902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.750 [2024-12-06 13:30:36.129914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x869c90 with addr=10.0.0.2, port=4420 00:22:49.750 [2024-12-06 13:30:36.129922] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x869c90 is same with the state(6) to be set 00:22:49.750 [2024-12-06 13:30:36.131038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.750 [2024-12-06 13:30:36.131250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.750 [2024-12-06 13:30:36.131605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.750 [2024-12-06 13:30:36.131613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 
13:30:36.131668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 
[2024-12-06 13:30:36.131979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.131989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.131997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.132213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.132221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc73020 is same with the state(6) to be set 00:22:49.751 [2024-12-06 13:30:36.133520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.133535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.133550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.133560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.751 [2024-12-06 13:30:36.133573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.751 [2024-12-06 13:30:36.133582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.751 [2024-12-06 13:30:36.133594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.751 [2024-12-06 13:30:36.133604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.751 [2024-12-06 13:30:36.133616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.751 [2024-12-06 13:30:36.133626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.751 [2024-12-06 13:30:36.133638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.751 [2024-12-06 13:30:36.133648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.751 [2024-12-06 13:30:36.133659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.133983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.133994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.752 [2024-12-06 13:30:36.134389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.752 [2024-12-06 13:30:36.134399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.753 [2024-12-06 13:30:36.134734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.753 [2024-12-06 13:30:36.134743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc74060 is same with the state(6) to be set
00:22:49.753 [2024-12-06 13:30:36.137337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:49.753 [2024-12-06 13:30:36.137364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:49.753 [2024-12-06 13:30:36.137378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:49.753 task offset: 20224 on job bdev=Nvme6n1 fails
00:22:49.753
00:22:49.753                                                                                                 Latency(us)
00:22:49.753 [2024-12-06T12:30:36.412Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:49.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme1n1 ended in about 0.84 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme1n1             :       0.84     152.78       9.55      76.39       0.00  275749.83   16384.00  244667.73
00:22:49.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme2n1 ended in about 0.84 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme2n1             :       0.84     152.32       9.52      76.16       0.00  270103.32   21626.88  228939.09
00:22:49.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme3n1 ended in about 0.84 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme3n1             :       0.84     153.07       9.57      75.94       0.00  263095.91   20206.93  237677.23
00:22:49.753 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme4n1 ended in about 0.83 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme4n1             :       0.83     160.78      10.05      77.37       0.00  246357.14   19442.35  249910.61
00:22:49.753 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme5n1 ended in about 0.85 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme5n1             :       0.85     151.44       9.46      75.72       0.00  252345.74   19333.12  246415.36
00:22:49.753 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme6n1 ended in about 0.81 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme6n1             :       0.81     158.77       9.92      79.39       0.00  232860.52    2908.16  274377.39
00:22:49.753 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme7n1 ended in about 0.83 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme7n1             :       0.83     230.01      14.38      10.84       0.00  224654.95   17367.04  241172.48
00:22:49.753 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme8n1 ended in about 0.85 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme8n1             :       0.85     225.04      14.07      75.01       0.00  176631.25   10267.31  253405.87
00:22:49.753 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme9n1 ended in about 0.86 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme9n1             :       0.86     149.59       9.35      74.79       0.00  229995.24   20425.39  249910.61
00:22:49.753 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:49.753 Job: Nvme10n1 ended in about 0.85 seconds with error
00:22:49.753 	 Verification LBA range: start 0x0 length 0x400
00:22:49.753 	 Nvme10n1            :       0.85     151.00       9.44      75.50       0.00  220832.43   31457.28  242920.11
00:22:49.753 [2024-12-06T12:30:36.412Z] ===================================================================================================================
00:22:49.753 [2024-12-06T12:30:36.412Z] Total               :              1684.79     105.30     697.11       0.00  237227.62    2908.16  274377.39
00:22:49.753 [2024-12-06 13:30:36.165492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:49.753 [2024-12-06 13:30:36.165525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:49.753 [2024-12-06 13:30:36.165889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.753 [2024-12-06 13:30:36.165907]
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc979e0 with addr=10.0.0.2, port=4420
00:22:49.753 [2024-12-06 13:30:36.165918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc979e0 is same with the state(6) to be set
00:22:49.753 [2024-12-06 13:30:36.166097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.753 [2024-12-06 13:30:36.166109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbea10 with addr=10.0.0.2, port=4420
00:22:49.753 [2024-12-06 13:30:36.166116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbea10 is same with the state(6) to be set
00:22:49.753 [2024-12-06 13:30:36.166130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc985b0 (9): Bad file descriptor
00:22:49.753 [2024-12-06 13:30:36.166143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86d8d0 (9): Bad file descriptor
00:22:49.753 [2024-12-06 13:30:36.166154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86d460 (9): Bad file descriptor
00:22:49.753 [2024-12-06 13:30:36.166164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869c90 (9): Bad file descriptor
00:22:49.753 [2024-12-06 13:30:36.166609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.753 [2024-12-06 13:30:36.166626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86c960 with addr=10.0.0.2, port=4420
00:22:49.753 [2024-12-06 13:30:36.166634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86c960 is same with the state(6) to be set
00:22:49.753 [2024-12-06 13:30:36.166963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.753 [2024-12-06 13:30:36.166974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc8e830 with addr=10.0.0.2, port=4420
00:22:49.754 [2024-12-06 13:30:36.166982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8e830 is same with the state(6) to be set
00:22:49.754 [2024-12-06 13:30:36.167154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.754 [2024-12-06 13:30:36.167165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x785610 with addr=10.0.0.2, port=4420
00:22:49.754 [2024-12-06 13:30:36.167173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x785610 is same with the state(6) to be set
00:22:49.754 [2024-12-06 13:30:36.167546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:49.754 [2024-12-06 13:30:36.167558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc1550 with addr=10.0.0.2, port=4420
00:22:49.754 [2024-12-06 13:30:36.167566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc1550 is same with the state(6) to be set
00:22:49.754 [2024-12-06 13:30:36.167579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc979e0 (9): Bad file descriptor
00:22:49.754 [2024-12-06 13:30:36.167589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbea10 (9): Bad file descriptor
00:22:49.754 [2024-12-06 13:30:36.167599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.167606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.167615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.167625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.167633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.167640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.167647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.167654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.167662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.167669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.167676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.167683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.167691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.167697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.167704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.167712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.167754] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:49.754 [2024-12-06 13:30:36.167768] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:49.754 [2024-12-06 13:30:36.168392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c960 (9): Bad file descriptor
00:22:49.754 [2024-12-06 13:30:36.168407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8e830 (9): Bad file descriptor
00:22:49.754 [2024-12-06 13:30:36.168417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x785610 (9): Bad file descriptor
00:22:49.754 [2024-12-06 13:30:36.168427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc1550 (9): Bad file descriptor
00:22:49.754 [2024-12-06 13:30:36.168436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.168443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.168450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.168463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.168474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.168482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.168490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.168497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.168544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:49.754 [2024-12-06 13:30:36.168557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:49.754 [2024-12-06 13:30:36.168567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:49.754 [2024-12-06 13:30:36.168576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:49.754 [2024-12-06 13:30:36.168608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:49.754 [2024-12-06 13:30:36.168616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:49.754 [2024-12-06 13:30:36.168625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:49.754 [2024-12-06 13:30:36.168632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:49.754 [2024-12-06 13:30:36.168639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:49.754 [2024-12-06 13:30:36.168646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:49.754 [2024-12-06 13:30:36.168653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:49.754 [2024-12-06 13:30:36.168660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:49.754 [2024-12-06 13:30:36.168668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:49.754 [2024-12-06 13:30:36.168674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:49.754 [2024-12-06 13:30:36.168681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:49.754 [2024-12-06 13:30:36.168688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:49.754 [2024-12-06 13:30:36.168695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:49.754 [2024-12-06 13:30:36.168702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:49.754 [2024-12-06 13:30:36.168709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:49.754 [2024-12-06 13:30:36.168715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:49.754 [2024-12-06 13:30:36.169065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.754 [2024-12-06 13:30:36.169080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x869c90 with addr=10.0.0.2, port=4420 00:22:49.754 [2024-12-06 13:30:36.169089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x869c90 is same with the state(6) to be set 00:22:49.754 [2024-12-06 13:30:36.169406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.754 [2024-12-06 13:30:36.169417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86d460 with addr=10.0.0.2, port=4420 00:22:49.754 [2024-12-06 13:30:36.169427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d460 is same with the state(6) to be set 00:22:49.754 [2024-12-06 13:30:36.169746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.754 [2024-12-06 13:30:36.169757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x86d8d0 with addr=10.0.0.2, port=4420 00:22:49.754 [2024-12-06 13:30:36.169766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d8d0 is same with the state(6) to be set 00:22:49.754 [2024-12-06 13:30:36.169953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.754 [2024-12-06 13:30:36.169967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc985b0 with addr=10.0.0.2, port=4420 00:22:49.754 [2024-12-06 13:30:36.169974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc985b0 is same with the state(6) to be set 00:22:49.754 [2024-12-06 13:30:36.170004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869c90 (9): Bad file descriptor 00:22:49.754 [2024-12-06 13:30:36.170015] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86d460 (9): Bad file descriptor 00:22:49.754 [2024-12-06 13:30:36.170024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86d8d0 (9): Bad file descriptor 00:22:49.754 [2024-12-06 13:30:36.170034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc985b0 (9): Bad file descriptor 00:22:49.754 [2024-12-06 13:30:36.170060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:49.754 [2024-12-06 13:30:36.170068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:49.754 [2024-12-06 13:30:36.170076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:49.754 [2024-12-06 13:30:36.170084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:49.754 [2024-12-06 13:30:36.170091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:49.755 [2024-12-06 13:30:36.170098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:49.755 [2024-12-06 13:30:36.170105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:49.755 [2024-12-06 13:30:36.170111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:49.755 [2024-12-06 13:30:36.170118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:49.755 [2024-12-06 13:30:36.170125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:49.755 [2024-12-06 13:30:36.170132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:49.755 [2024-12-06 13:30:36.170138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:49.755 [2024-12-06 13:30:36.170146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:49.755 [2024-12-06 13:30:36.170153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:49.755 [2024-12-06 13:30:36.170160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:49.755 [2024-12-06 13:30:36.170167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
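The repeated `connect() failed, errno = 111` entries above are expected here: on Linux, errno 111 is ECONNREFUSED, i.e. the initiator keeps retrying its qpairs while the target it was connected to has been shut down by the test. A minimal sketch (not part of the test suite; it assumes `python3` is available, as on these CI hosts) confirming the errno mapping:

```shell
#!/usr/bin/env bash
# Hypothetical helper, not from autotest: resolve errno 111 the way the
# SPDK log reports it. On Linux this prints "ECONNREFUSED Connection refused".
errno_111=$(python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))')
echo "$errno_111"
```
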
00:22:49.755 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2218134 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2218134 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:50.694 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2218134 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
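The `NOT wait 2218134` trace above shows the harness normalizing the exit status of a command that is expected to fail: the raw status from `wait` (255, since the bdevperf process was killed) is folded to 127 because it exceeds 128, then collapsed to 1 by the `case` statement, and the final `(( !es == 0 ))` makes `NOT` succeed only when the wrapped command failed. A simplified sketch of that logic, inferred from the trace rather than copied from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the exit-status normalization visible in the xtrace:
# statuses > 128 (signal deaths) fold to 127, any failure collapses
# to 1, and the result is inverted so "not cmd" succeeds iff cmd failed.
not() {
    local es=0
    "$@" || es=$?          # capture the wrapped command's exit status
    (( es > 128 )) && es=127
    case "$es" in
        0) ;;              # command succeeded -> not() will fail below
        *) es=1 ;;         # 255, 127, ... all collapse to 1
    esac
    (( !es == 0 ))         # invert: succeed only when es is nonzero
}

not false && echo "not false -> success"
not true  || echo "not true  -> failure"
```
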
00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:50.695 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.956 rmmod nvme_tcp 00:22:50.956 rmmod nvme_fabrics 00:22:50.956 rmmod nvme_keyring 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:50.956 13:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2217753 ']' 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2217753 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2217753 ']' 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2217753 00:22:50.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2217753) - No such process 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2217753 is not found' 00:22:50.956 Process with pid 2217753 is not found 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.956 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.869 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.869 00:22:52.869 real 0m7.907s 00:22:52.869 user 0m19.963s 00:22:52.869 sys 0m1.207s 00:22:52.869 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.869 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.869 ************************************ 00:22:52.869 END TEST nvmf_shutdown_tc3 00:22:52.869 ************************************ 00:22:53.130 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:53.130 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.131 ************************************ 00:22:53.131 START TEST nvmf_shutdown_tc4 00:22:53.131 ************************************ 00:22:53.131 13:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:53.131 13:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.131 13:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:53.131 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:53.131 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.131 13:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:22:53.131 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:53.131 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.131 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.132 13:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.132 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:53.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:22:53.393 00:22:53.393 --- 10.0.0.2 ping statistics --- 00:22:53.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.393 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:22:53.393 00:22:53.393 --- 10.0.0.1 ping statistics --- 00:22:53.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.393 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.393 13:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2219434 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2219434 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2219434 ']' 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.393 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:53.393 [2024-12-06 13:30:39.978218] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:22:53.393 [2024-12-06 13:30:39.978269] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.393 [2024-12-06 13:30:40.043513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.653 [2024-12-06 13:30:40.074592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.653 [2024-12-06 13:30:40.074622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.653 [2024-12-06 13:30:40.074628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.653 [2024-12-06 13:30:40.074633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.653 [2024-12-06 13:30:40.074638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:53.653 [2024-12-06 13:30:40.075880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.653 [2024-12-06 13:30:40.076033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.653 [2024-12-06 13:30:40.076184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.654 [2024-12-06 13:30:40.076186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:54.224 [2024-12-06 13:30:40.816039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.224 13:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:54.224 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.485 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:54.485 Malloc1 00:22:54.485 [2024-12-06 13:30:40.923188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.485 Malloc2 00:22:54.485 Malloc3 00:22:54.485 Malloc4 00:22:54.485 Malloc5 00:22:54.485 Malloc6 00:22:54.485 Malloc7 00:22:54.746 Malloc8 00:22:54.746 Malloc9 
00:22:54.746 Malloc10 00:22:54.746 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.746 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:54.746 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:54.746 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:54.746 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2219668 00:22:54.746 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:54.747 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:54.747 [2024-12-06 13:30:41.402923] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
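[editor's note] The I/O load launched at shutdown.sh@148 above is SPDK's `spdk_nvme_perf` tool. As a hedged sketch, the helper below merely assembles the command line recorded in this log (the flag values are taken verbatim from the trace; `build_perf_cmd` itself is a hypothetical name introduced here for illustration):

```shell
# Assemble the spdk_nvme_perf invocation seen in this log: queue depth
# 128, 45056-byte I/O with -O 4096, random writes for 20 seconds against
# the given NVMe/TCP listener, with -P 4. Only the address/port vary.
build_perf_cmd() {
    local traddr=$1 trsvcid=$2
    echo "spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20" \
         "-r 'trtype:tcp adrfam:IPV4 traddr:$traddr trsvcid:$trsvcid' -P 4"
}
```

For the listener brought up earlier in this log, `build_perf_cmd 10.0.0.2 4420` reproduces the perf command at shutdown.sh@148.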
00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2219434 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2219434 ']' 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2219434 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219434 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219434' 00:23:00.034 killing process with pid 2219434 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2219434 00:23:00.034 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2219434 00:23:00.034 [2024-12-06 13:30:46.404263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 
13:30:46.404307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaeee0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf3d0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf3d0 is same with the state(6) to be set 00:23:00.034 [2024-12-06 13:30:46.404681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf3d0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf3d0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf3d0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404697] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf3d0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eae500 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eae500 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eae500 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eae500 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.404912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eae500 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0fe0 is same with the state(6) to be set 00:23:00.035 starting I/O failed: -6 00:23:00.035 [2024-12-06 13:30:46.405164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0fe0 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0fe0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1eb0fe0 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb14d0 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb14d0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb14d0 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb14d0 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write
completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 [2024-12-06 13:30:46.405637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb19a0 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:00.035 Write completed with error (sct=0,
sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0b10 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0b10 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.405949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0b10 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0b10 is same with the state(6) to be set 00:23:00.035 [2024-12-06 13:30:46.405960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0b10 is same with the state(6) to be set 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 
starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 [2024-12-06 13:30:46.406496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed 
with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.035 starting I/O failed: -6 00:23:00.035 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 
starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 [2024-12-06 13:30:46.407432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 
starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 
00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 [2024-12-06 13:30:46.408853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:00.036 NVMe io qpair 
process completion error 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 [2024-12-06 13:30:46.409423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.036 Write completed with error (sct=0, sc=8) 00:23:00.036 starting I/O failed: -6 00:23:00.037 [2024-12-06 13:30:46.409440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.037 [2024-12-06 13:30:46.409445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 [2024-12-06 13:30:46.409451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.037 [2024-12-06 13:30:46.409469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 [2024-12-06 13:30:46.409476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.037 [2024-12-06 13:30:46.409481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with Write completed with error (sct=0, sc=8) 00:23:00.037 the state(6) to be set 00:23:00.037 [2024-12-06 13:30:46.409498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb0130 is same with the state(6) to be set 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 [2024-12-06 13:30:46.409839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 
starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 [2024-12-06 13:30:46.410642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, 
sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O 
failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 [2024-12-06 13:30:46.411559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.037 starting I/O failed: -6 00:23:00.037 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write 
completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 
Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 
00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 [2024-12-06 13:30:46.413153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:00.038 NVMe io qpair process completion error 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error 
(sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 [2024-12-06 13:30:46.414274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:00.038 
Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 starting I/O failed: -6 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.038 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with 
error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 [2024-12-06 13:30:46.415152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 
Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, 
sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 [2024-12-06 13:30:46.416062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6 00:23:00.039 Write completed 
with error (sct=0, sc=8) 00:23:00.039 starting I/O failed: -6
[identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeated; repeats omitted]
00:23:00.040 [2024-12-06 13:30:46.418568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:00.040 NVMe io qpair process completion error
[repeated completion-error lines omitted]
00:23:00.040 [2024-12-06 13:30:46.419680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated completion-error lines omitted]
00:23:00.040 [2024-12-06 13:30:46.420575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated completion-error lines omitted]
00:23:00.041 [2024-12-06 13:30:46.421517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated completion-error lines omitted]
00:23:00.041 [2024-12-06 13:30:46.423309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:00.041 NVMe io qpair process completion error
[repeated completion-error lines omitted]
00:23:00.042 [2024-12-06 13:30:46.424386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated completion-error lines omitted]
00:23:00.042 [2024-12-06 13:30:46.425244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated completion-error lines omitted]
00:23:00.042 [2024-12-06 13:30:46.426178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated completion-error lines omitted]
00:23:00.043 [2024-12-06 13:30:46.427814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:00.043 NVMe io qpair process completion error
[repeated completion-error lines omitted]
00:23:00.043 [2024-12-06 13:30:46.428950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated completion-error lines omitted]
00:23:00.043 Write completed with error (sct=0, sc=8)
00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 [2024-12-06 13:30:46.429915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 
Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.043 Write completed with error (sct=0, sc=8) 00:23:00.043 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, 
sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 [2024-12-06 13:30:46.430847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 
00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: 
-6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O 
failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 [2024-12-06 13:30:46.433743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:00.044 NVMe io qpair process completion error 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 starting I/O failed: -6 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.044 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, 
sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 [2024-12-06 13:30:46.434873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 
00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write 
completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 [2024-12-06 13:30:46.435704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with 
error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 
starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 [2024-12-06 13:30:46.436613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O 
failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.045 starting I/O failed: -6 00:23:00.045 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting 
I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 starting I/O failed: -6 00:23:00.046 Write completed with error (sct=0, sc=8) 00:23:00.046 
00:23:00.046 starting I/O failed: -6
00:23:00.046 Write completed with error (sct=0, sc=8)
00:23:00.046 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:23:00.046 [2024-12-06 13:30:46.438056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:00.046 NVMe io qpair process completion error
00:23:00.046 [... repeated entries omitted ...]
00:23:00.046 [2024-12-06 13:30:46.439249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:00.046 [... repeated entries omitted ...]
00:23:00.046 [2024-12-06 13:30:46.440264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:00.047 [... repeated entries omitted ...]
00:23:00.047 [2024-12-06 13:30:46.441233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:00.047 [... repeated entries omitted ...]
00:23:00.047 [2024-12-06 13:30:46.443347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:00.047 NVMe io qpair process completion error
00:23:00.048 [... repeated entries omitted ...]
00:23:00.048 [2024-12-06 13:30:46.444657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:00.048 [... repeated entries omitted ...]
00:23:00.048 [2024-12-06 13:30:46.445488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:00.048 [... repeated entries omitted ...]
00:23:00.048 [2024-12-06 13:30:46.446409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:00.049 [... repeated entries omitted ...]
00:23:00.049 [2024-12-06 13:30:46.448307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:00.049 NVMe io qpair process completion error
00:23:00.050 [... repeated entries omitted ...]
00:23:00.050 [2024-12-06 13:30:46.450121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:00.050 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
[... repeated write-error / I/O-failed messages elided ...] 00:23:00.050 [2024-12-06 13:30:46.451077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 [... repeated write-error / I/O-failed messages elided ...]
[... repeated write-error / I/O-failed messages elided ...]
[... repeated write-error / I/O-failed messages elided ...] 00:23:00.051 [2024-12-06 13:30:46.453572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:00.051 NVMe io qpair process completion error 00:23:00.051 Initializing NVMe Controllers 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:00.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:00.051 Controller IO queue size 128, less than required. 00:23:00.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:00.051 Initialization complete. Launching workers. 
00:23:00.051 ========================================================
00:23:00.051 Latency(us)
00:23:00.051 Device Information : IOPS MiB/s Average min max
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1842.37 79.16 69491.66 849.70 123853.13
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1881.29 80.84 68076.21 540.53 128950.88
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1871.77 80.43 68455.23 780.77 130461.41
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1858.24 79.85 68974.72 904.49 132639.02
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1854.43 79.68 68435.98 888.75 119563.06
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1899.69 81.63 66820.25 851.14 123389.87
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1876.43 80.63 67669.06 923.55 118864.10
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1874.95 80.56 67756.16 907.90 119495.07
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1876.21 80.62 67733.56 857.73 125479.15
00:23:00.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1903.08 81.77 66799.91 722.92 125442.14
00:23:00.051 ========================================================
00:23:00.051 Total : 18738.46 805.17 68013.89 540.53 132639.02
00:23:00.051
00:23:00.051 [2024-12-06 13:30:46.458180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2ef0 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2890 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x20e3a70 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2bc0 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4900 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4ae0 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3740 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e3410 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e4720 is same with the state(6) to be set
00:23:00.051 [2024-12-06 13:30:46.458474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2560 is same with the state(6) to be set
00:23:00.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:00.051 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:01.008 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2219668 00:23:01.008 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:01.008 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2219668 00:23:01.008 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:23:01.008 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2219668 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.009 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.009 rmmod nvme_tcp 00:23:01.269 rmmod nvme_fabrics 00:23:01.269 rmmod nvme_keyring 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2219434 ']' 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2219434 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2219434 ']' 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2219434 00:23:01.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2219434) - No such process 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2219434 is not found' 00:23:01.269 Process with pid 2219434 is not found 
00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.269 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.179 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:03.179 00:23:03.179 real 0m10.222s 00:23:03.179 user 0m27.966s 00:23:03.179 sys 0m4.007s 00:23:03.179 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.179 13:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:03.179 ************************************ 00:23:03.179 END TEST nvmf_shutdown_tc4 00:23:03.179 ************************************ 00:23:03.440 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:03.440 00:23:03.440 real 0m43.700s 00:23:03.440 user 1m47.015s 00:23:03.440 sys 0m13.794s 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:03.441 ************************************ 00:23:03.441 END TEST nvmf_shutdown 00:23:03.441 ************************************ 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.441 ************************************ 00:23:03.441 START TEST nvmf_nsid 00:23:03.441 ************************************ 00:23:03.441 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:03.441 * Looking for test storage... 
00:23:03.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:03.441 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:03.441 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:03.441 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.702 
13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.702 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:03.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.702 --rc genhtml_branch_coverage=1 00:23:03.702 --rc genhtml_function_coverage=1 00:23:03.702 --rc genhtml_legend=1 00:23:03.702 --rc geninfo_all_blocks=1 00:23:03.702 --rc 
geninfo_unexecuted_blocks=1 00:23:03.702 00:23:03.702 ' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:03.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.703 --rc genhtml_branch_coverage=1 00:23:03.703 --rc genhtml_function_coverage=1 00:23:03.703 --rc genhtml_legend=1 00:23:03.703 --rc geninfo_all_blocks=1 00:23:03.703 --rc geninfo_unexecuted_blocks=1 00:23:03.703 00:23:03.703 ' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:03.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.703 --rc genhtml_branch_coverage=1 00:23:03.703 --rc genhtml_function_coverage=1 00:23:03.703 --rc genhtml_legend=1 00:23:03.703 --rc geninfo_all_blocks=1 00:23:03.703 --rc geninfo_unexecuted_blocks=1 00:23:03.703 00:23:03.703 ' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:03.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.703 --rc genhtml_branch_coverage=1 00:23:03.703 --rc genhtml_function_coverage=1 00:23:03.703 --rc genhtml_legend=1 00:23:03.703 --rc geninfo_all_blocks=1 00:23:03.703 --rc geninfo_unexecuted_blocks=1 00:23:03.703 00:23:03.703 ' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.703 13:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:03.703 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:11.848 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:11.848 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.848 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:11.849 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:11.849 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:11.849 13:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:11.849 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:11.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:23:11.849 00:23:11.849 --- 10.0.0.2 ping statistics --- 00:23:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.849 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:23:11.849 00:23:11.849 --- 10.0.0.1 ping statistics --- 00:23:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.849 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.849 13:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2225114 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2225114 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2225114 ']' 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.849 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:11.849 [2024-12-06 13:30:57.745696] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:23:11.849 [2024-12-06 13:30:57.745762] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.849 [2024-12-06 13:30:57.843928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.849 [2024-12-06 13:30:57.895199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.849 [2024-12-06 13:30:57.895250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.849 [2024-12-06 13:30:57.895260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.849 [2024-12-06 13:30:57.895268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.849 [2024-12-06 13:30:57.895275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.849 [2024-12-06 13:30:57.896079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2225357 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.111 
13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=41a8b7b5-647c-45e2-bd59-55fc8dd88517 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=95615adc-c706-4a3f-b7ed-ad2a02a6abe8 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=677d7bc5-37e1-4417-89eb-1a3b63d57269 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.111 null0 00:23:12.111 null1 00:23:12.111 null2 00:23:12.111 [2024-12-06 13:30:58.655931] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.111 [2024-12-06 13:30:58.664368] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:23:12.111 [2024-12-06 13:30:58.664436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225357 ] 00:23:12.111 [2024-12-06 13:30:58.680200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2225357 /var/tmp/tgt2.sock 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2225357 ']' 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:12.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.111 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:12.111 [2024-12-06 13:30:58.756334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.373 [2024-12-06 13:30:58.808495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.635 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.635 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:12.635 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:12.896 [2024-12-06 13:30:59.377955] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.896 [2024-12-06 13:30:59.394136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:12.896 nvme0n1 nvme0n2 00:23:12.896 nvme1n1 00:23:12.896 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:12.896 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:12.896 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.281 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:14.281 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:14.282 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:15.226 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:15.226 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:15.226 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:15.226 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 41a8b7b5-647c-45e2-bd59-55fc8dd88517 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:15.486 13:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=41a8b7b5647c45e2bd5955fc8dd88517 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 41A8B7B5647C45E2BD5955FC8DD88517 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 41A8B7B5647C45E2BD5955FC8DD88517 == \4\1\A\8\B\7\B\5\6\4\7\C\4\5\E\2\B\D\5\9\5\5\F\C\8\D\D\8\8\5\1\7 ]] 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 95615adc-c706-4a3f-b7ed-ad2a02a6abe8 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:15.486 
13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:15.486 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=95615adcc7064a3fb7edad2a02a6abe8 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 95615ADCC7064A3FB7EDAD2A02A6ABE8 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 95615ADCC7064A3FB7EDAD2A02A6ABE8 == \9\5\6\1\5\A\D\C\C\7\0\6\4\A\3\F\B\7\E\D\A\D\2\A\0\2\A\6\A\B\E\8 ]] 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 677d7bc5-37e1-4417-89eb-1a3b63d57269 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
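Each NGUID check above pairs `uuid2nguid` (the `tr -d -` step) against `nvme id-ns /dev/nvme0nX -o json | jq -r .nguid`. A sketch of the conversion half, with the uppercase step assumed from the `41A8B7B5...` values echoed in the trace:

```shell
# Convert a UUID into the NGUID form compared in the log:
# drop the dashes and uppercase the hex digits (uppercasing is
# assumed from the echoed comparison values, not shown explicitly).
uuid2nguid() {
  tr -d - <<< "$1" | tr '[:lower:]' '[:upper:]'
}
```

The other half of each check reads the live value with `nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid`, which needs a connected controller and so is not reproduced here.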
00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=677d7bc537e1441789eb1a3b63d57269 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 677D7BC537E1441789EB1A3B63D57269 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 677D7BC537E1441789EB1A3B63D57269 == \6\7\7\D\7\B\C\5\3\7\E\1\4\4\1\7\8\9\E\B\1\A\3\B\6\3\D\5\7\2\6\9 ]] 00:23:15.486 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2225357 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2225357 ']' 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2225357 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225357 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225357' 00:23:15.746 killing process with pid 2225357 00:23:15.746 13:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2225357 00:23:15.746 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2225357 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.007 rmmod nvme_tcp 00:23:16.007 rmmod nvme_fabrics 00:23:16.007 rmmod nvme_keyring 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2225114 ']' 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2225114 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2225114 ']' 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2225114 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:16.007 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.007 13:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225114 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225114' 00:23:16.267 killing process with pid 2225114 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2225114 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2225114 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.267 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.267 13:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.815 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.815 00:23:18.815 real 0m14.957s 00:23:18.815 user 0m11.327s 00:23:18.815 sys 0m6.928s 00:23:18.815 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.815 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:18.815 ************************************ 00:23:18.815 END TEST nvmf_nsid 00:23:18.815 ************************************ 00:23:18.815 13:31:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:18.815 00:23:18.815 real 13m3.017s 00:23:18.815 user 27m21.672s 00:23:18.815 sys 3m53.227s 00:23:18.815 13:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.815 13:31:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:18.815 ************************************ 00:23:18.815 END TEST nvmf_target_extra 00:23:18.815 ************************************ 00:23:18.815 13:31:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:18.815 13:31:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.815 13:31:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.815 13:31:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.815 ************************************ 00:23:18.815 START TEST nvmf_host 00:23:18.815 ************************************ 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:18.815 * Looking for test storage... 
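The `killprocess` steps traced above (a `kill -0` liveness check, `uname`, a `ps -o comm=` guard against killing sudo, then `kill` and `wait`) can be sketched as follows; the real SPDK helper differs in detail, so treat this as an approximation:

```shell
# Approximate killprocess (details assumed): refuse missing pids and
# sudo processes, then kill and reap, mirroring the kill -0 / uname /
# ps comm= / kill / wait sequence in the trace above.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1            # pid must be alive
  if [ "$(uname)" = Linux ]; then
    [ "$(ps -o comm= -p "$pid")" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                   # reap if it is our child
}
```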
00:23:18.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.815 --rc genhtml_branch_coverage=1 00:23:18.815 --rc genhtml_function_coverage=1 00:23:18.815 --rc genhtml_legend=1 00:23:18.815 --rc geninfo_all_blocks=1 00:23:18.815 --rc geninfo_unexecuted_blocks=1 00:23:18.815 00:23:18.815 ' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.815 --rc genhtml_branch_coverage=1 00:23:18.815 --rc genhtml_function_coverage=1 00:23:18.815 --rc genhtml_legend=1 00:23:18.815 --rc 
geninfo_all_blocks=1 00:23:18.815 --rc geninfo_unexecuted_blocks=1 00:23:18.815 00:23:18.815 ' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.815 --rc genhtml_branch_coverage=1 00:23:18.815 --rc genhtml_function_coverage=1 00:23:18.815 --rc genhtml_legend=1 00:23:18.815 --rc geninfo_all_blocks=1 00:23:18.815 --rc geninfo_unexecuted_blocks=1 00:23:18.815 00:23:18.815 ' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.815 --rc genhtml_branch_coverage=1 00:23:18.815 --rc genhtml_function_coverage=1 00:23:18.815 --rc genhtml_legend=1 00:23:18.815 --rc geninfo_all_blocks=1 00:23:18.815 --rc geninfo_unexecuted_blocks=1 00:23:18.815 00:23:18.815 ' 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.815 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.816 ************************************ 00:23:18.816 START TEST nvmf_multicontroller 00:23:18.816 ************************************ 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:18.816 * Looking for test storage... 
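The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 33, i.e. an empty flag value reaching a numeric test. A defensive sketch (helper name hypothetical) that defaults empty or unset to 0:

```shell
# Treat empty/unset flag values as 0 so -eq never sees an empty
# string (the cause of the "[: : integer expression expected" lines
# in the trace above).
flag_enabled() {
  [ "${1:-0}" -eq 1 ]
}
```

With this shape, `'[' '' -eq 1 ']'` would never be evaluated: `flag_enabled ''` simply returns false instead of printing an error.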
00:23:18.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:23:18.816 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:19.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.077 --rc genhtml_branch_coverage=1 00:23:19.077 --rc genhtml_function_coverage=1 
00:23:19.077 --rc genhtml_legend=1 00:23:19.077 --rc geninfo_all_blocks=1 00:23:19.077 --rc geninfo_unexecuted_blocks=1 00:23:19.077 00:23:19.077 ' 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:19.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.077 --rc genhtml_branch_coverage=1 00:23:19.077 --rc genhtml_function_coverage=1 00:23:19.077 --rc genhtml_legend=1 00:23:19.077 --rc geninfo_all_blocks=1 00:23:19.077 --rc geninfo_unexecuted_blocks=1 00:23:19.077 00:23:19.077 ' 00:23:19.077 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:19.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.078 --rc genhtml_branch_coverage=1 00:23:19.078 --rc genhtml_function_coverage=1 00:23:19.078 --rc genhtml_legend=1 00:23:19.078 --rc geninfo_all_blocks=1 00:23:19.078 --rc geninfo_unexecuted_blocks=1 00:23:19.078 00:23:19.078 ' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:19.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.078 --rc genhtml_branch_coverage=1 00:23:19.078 --rc genhtml_function_coverage=1 00:23:19.078 --rc genhtml_legend=1 00:23:19.078 --rc geninfo_all_blocks=1 00:23:19.078 --rc geninfo_unexecuted_blocks=1 00:23:19.078 00:23:19.078 ' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.078 13:31:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:19.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.078 13:31:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:27.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:27.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.224 13:31:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:27.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:27.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.224 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:23:27.225 00:23:27.225 --- 10.0.0.2 ping statistics --- 00:23:27.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.225 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:23:27.225 13:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:27.225 00:23:27.225 --- 10.0.0.1 ping statistics --- 00:23:27.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.225 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2230456 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2230456 00:23:27.225 13:31:13 
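[editor's note] The namespace setup and connectivity check traced above can be summarized as a dry-run sketch. Interface names (cvl_0_0, cvl_0_1), the namespace name, and the 10.0.0.x addresses are the ones from this run; the real commands require root, so this sketch only prints them rather than executing them:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace topology exercised in the log above.
# The target-side port is moved into its own network namespace so the SPDK
# target and the initiator can talk over real NICs on one host.
ns=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $ns"                                       # namespace for the target
  "ip link set cvl_0_0 netns $ns"                          # move target-side port into it
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                    # initiator IP (default netns)
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"  # target IP (inside netns)
  "ip link set cvl_0_1 up"
  "ip netns exec $ns ip link set cvl_0_0 up"
  "ip netns exec $ns ip link set lo up"
  "ping -c 1 10.0.0.2"                                     # initiator -> target check
  "ip netns exec $ns ping -c 1 10.0.0.1"                   # target -> initiator check
)
printf '%s\n' "${cmds[@]}"
```

The target application is then launched with `ip netns exec cvl_0_0_ns_spdk ...`, as the log shows, so it listens on 10.0.0.2 while bdevperf connects from 10.0.0.1.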
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2230456 ']' 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.225 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.225 [2024-12-06 13:31:13.114776] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:23:27.225 [2024-12-06 13:31:13.114846] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.225 [2024-12-06 13:31:13.216853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:27.225 [2024-12-06 13:31:13.269042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.225 [2024-12-06 13:31:13.269095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.225 [2024-12-06 13:31:13.269104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.225 [2024-12-06 13:31:13.269111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.225 [2024-12-06 13:31:13.269117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.225 [2024-12-06 13:31:13.271009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.225 [2024-12-06 13:31:13.271174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.225 [2024-12-06 13:31:13.271174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.485 13:31:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 [2024-12-06 13:31:13.996074] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 Malloc0 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 [2024-12-06 
13:31:14.069941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 [2024-12-06 13:31:14.081792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:27.485 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.486 Malloc1 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.486 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2230701 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2230701 /var/tmp/bdevperf.sock 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2230701 ']' 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.746 13:31:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.494 NVMe0n1 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:28.494 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.773 1 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:28.773 13:31:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.773 request: 00:23:28.773 { 00:23:28.773 "name": "NVMe0", 00:23:28.773 "trtype": "tcp", 00:23:28.773 "traddr": "10.0.0.2", 00:23:28.773 "adrfam": "ipv4", 00:23:28.773 "trsvcid": "4420", 00:23:28.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.773 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:28.773 "hostaddr": "10.0.0.1", 00:23:28.773 "prchk_reftag": false, 00:23:28.773 "prchk_guard": false, 00:23:28.773 "hdgst": false, 00:23:28.773 "ddgst": false, 00:23:28.773 "allow_unrecognized_csi": false, 00:23:28.773 "method": "bdev_nvme_attach_controller", 00:23:28.773 "req_id": 1 00:23:28.773 } 00:23:28.773 Got JSON-RPC error response 00:23:28.773 response: 00:23:28.773 { 00:23:28.773 "code": -114, 00:23:28.773 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:28.773 } 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:28.773 13:31:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.773 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.773 request: 00:23:28.773 { 00:23:28.773 "name": "NVMe0", 00:23:28.773 "trtype": "tcp", 00:23:28.773 "traddr": "10.0.0.2", 00:23:28.773 "adrfam": "ipv4", 00:23:28.773 "trsvcid": "4420", 00:23:28.774 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.774 "hostaddr": "10.0.0.1", 00:23:28.774 "prchk_reftag": false, 00:23:28.774 "prchk_guard": false, 00:23:28.774 "hdgst": false, 00:23:28.774 "ddgst": false, 00:23:28.774 "allow_unrecognized_csi": false, 00:23:28.774 "method": "bdev_nvme_attach_controller", 00:23:28.774 "req_id": 1 00:23:28.774 } 00:23:28.774 Got JSON-RPC error response 00:23:28.774 response: 00:23:28.774 { 00:23:28.774 "code": -114, 00:23:28.774 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:28.774 } 00:23:28.774 13:31:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.774 request: 00:23:28.774 { 00:23:28.774 "name": "NVMe0", 00:23:28.774 "trtype": "tcp", 00:23:28.774 "traddr": "10.0.0.2", 00:23:28.774 "adrfam": "ipv4", 00:23:28.774 "trsvcid": "4420", 00:23:28.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.774 "hostaddr": "10.0.0.1", 00:23:28.774 "prchk_reftag": false, 00:23:28.774 "prchk_guard": false, 00:23:28.774 "hdgst": false, 00:23:28.774 "ddgst": false, 00:23:28.774 "multipath": "disable", 00:23:28.774 "allow_unrecognized_csi": false, 00:23:28.774 "method": "bdev_nvme_attach_controller", 00:23:28.774 "req_id": 1 00:23:28.774 } 00:23:28.774 Got JSON-RPC error response 00:23:28.774 response: 00:23:28.774 { 00:23:28.774 "code": -114, 00:23:28.774 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:28.774 } 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.774 request: 00:23:28.774 { 00:23:28.774 "name": "NVMe0", 00:23:28.774 "trtype": "tcp", 00:23:28.774 "traddr": "10.0.0.2", 00:23:28.774 "adrfam": "ipv4", 00:23:28.774 "trsvcid": "4420", 00:23:28.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.774 "hostaddr": "10.0.0.1", 00:23:28.774 "prchk_reftag": false, 00:23:28.774 "prchk_guard": false, 00:23:28.774 "hdgst": false, 00:23:28.774 "ddgst": false, 00:23:28.774 "multipath": "failover", 00:23:28.774 "allow_unrecognized_csi": false, 00:23:28.774 "method": "bdev_nvme_attach_controller", 00:23:28.774 "req_id": 1 00:23:28.774 } 00:23:28.774 Got JSON-RPC error response 00:23:28.774 response: 00:23:28.774 { 00:23:28.774 "code": -114, 00:23:28.774 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:28.774 } 00:23:28.774 13:31:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.774 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.034 NVMe0n1 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.034 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.294 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:29.294 13:31:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.234 { 00:23:30.234 "results": [ 00:23:30.234 { 00:23:30.234 "job": "NVMe0n1", 00:23:30.234 "core_mask": "0x1", 00:23:30.234 "workload": "write", 00:23:30.234 "status": "finished", 00:23:30.234 "queue_depth": 128, 00:23:30.234 "io_size": 4096, 00:23:30.234 "runtime": 1.006099, 00:23:30.234 "iops": 27733.851241279437, 00:23:30.234 "mibps": 108.3353564112478, 00:23:30.234 "io_failed": 0, 00:23:30.234 "io_timeout": 0, 00:23:30.234 "avg_latency_us": 4601.203181497807, 00:23:30.234 "min_latency_us": 2348.3733333333334, 00:23:30.234 "max_latency_us": 6881.28 00:23:30.234 } 00:23:30.234 ], 00:23:30.234 "core_count": 1 00:23:30.234 } 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2230701 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2230701 ']' 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2230701 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.234 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230701 00:23:30.494 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.494 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.494 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230701' 00:23:30.494 killing process with pid 2230701 00:23:30.494 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2230701 00:23:30.494 13:31:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2230701 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:30.494 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:30.494 [2024-12-06 13:31:14.213262] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:23:30.494 [2024-12-06 13:31:14.213331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230701 ] 00:23:30.494 [2024-12-06 13:31:14.307345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.494 [2024-12-06 13:31:14.360836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.494 [2024-12-06 13:31:15.700373] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name f90b111e-9bdf-47ae-87d8-b5c680176fe8 already exists 00:23:30.494 [2024-12-06 13:31:15.700402] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:f90b111e-9bdf-47ae-87d8-b5c680176fe8 alias for bdev NVMe1n1 00:23:30.494 [2024-12-06 13:31:15.700412] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:30.494 Running I/O for 1 seconds... 00:23:30.494 27727.00 IOPS, 108.31 MiB/s 00:23:30.494 Latency(us) 00:23:30.494 [2024-12-06T12:31:17.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.494 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:30.494 NVMe0n1 : 1.01 27733.85 108.34 0.00 0.00 4601.20 2348.37 6881.28 00:23:30.494 [2024-12-06T12:31:17.153Z] =================================================================================================================== 00:23:30.494 [2024-12-06T12:31:17.153Z] Total : 27733.85 108.34 0.00 0.00 4601.20 2348.37 6881.28 00:23:30.494 Received shutdown signal, test time was about 1.000000 seconds 00:23:30.494 00:23:30.494 Latency(us) 00:23:30.494 [2024-12-06T12:31:17.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.494 [2024-12-06T12:31:17.153Z] =================================================================================================================== 00:23:30.494 [2024-12-06T12:31:17.153Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:23:30.494 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.494 rmmod nvme_tcp 00:23:30.494 rmmod nvme_fabrics 00:23:30.494 rmmod nvme_keyring 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2230456 ']' 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2230456 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2230456 ']' 00:23:30.494 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2230456 
00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230456 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230456' 00:23:30.755 killing process with pid 2230456 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2230456 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2230456 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.755 13:31:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.297 00:23:33.297 real 0m14.137s 00:23:33.297 user 0m17.645s 00:23:33.297 sys 0m6.550s 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.297 ************************************ 00:23:33.297 END TEST nvmf_multicontroller 00:23:33.297 ************************************ 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.297 ************************************ 00:23:33.297 START TEST nvmf_aer 00:23:33.297 ************************************ 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:33.297 * Looking for test storage... 
00:23:33.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.297 --rc genhtml_branch_coverage=1 00:23:33.297 --rc genhtml_function_coverage=1 00:23:33.297 --rc genhtml_legend=1 00:23:33.297 --rc geninfo_all_blocks=1 00:23:33.297 --rc geninfo_unexecuted_blocks=1 00:23:33.297 00:23:33.297 ' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:33.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.297 --rc 
genhtml_branch_coverage=1 00:23:33.297 --rc genhtml_function_coverage=1 00:23:33.297 --rc genhtml_legend=1 00:23:33.297 --rc geninfo_all_blocks=1 00:23:33.297 --rc geninfo_unexecuted_blocks=1 00:23:33.297 00:23:33.297 ' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:33.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.297 --rc genhtml_branch_coverage=1 00:23:33.297 --rc genhtml_function_coverage=1 00:23:33.297 --rc genhtml_legend=1 00:23:33.297 --rc geninfo_all_blocks=1 00:23:33.297 --rc geninfo_unexecuted_blocks=1 00:23:33.297 00:23:33.297 ' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.297 --rc genhtml_branch_coverage=1 00:23:33.297 --rc genhtml_function_coverage=1 00:23:33.297 --rc genhtml_legend=1 00:23:33.297 --rc geninfo_all_blocks=1 00:23:33.297 --rc geninfo_unexecuted_blocks=1 00:23:33.297 00:23:33.297 ' 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.297 13:31:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.297 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.298 13:31:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:41.437 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:41.437 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.437 13:31:26 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:41.437 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:41.437 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.437 13:31:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:41.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:23:41.437 00:23:41.437 --- 10.0.0.2 ping statistics --- 00:23:41.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.437 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:41.437 00:23:41.437 --- 10.0.0.1 ping statistics --- 00:23:41.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.437 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.437 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2235496 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2235496 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2235496 ']' 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.438 13:31:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.438 [2024-12-06 13:31:27.313356] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:23:41.438 [2024-12-06 13:31:27.313423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.438 [2024-12-06 13:31:27.414195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.438 [2024-12-06 13:31:27.467494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:41.438 [2024-12-06 13:31:27.467549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.438 [2024-12-06 13:31:27.467558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.438 [2024-12-06 13:31:27.467565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.438 [2024-12-06 13:31:27.467571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.438 [2024-12-06 13:31:27.469646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.438 [2024-12-06 13:31:27.469804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.438 [2024-12-06 13:31:27.469965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.438 [2024-12-06 13:31:27.469966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 [2024-12-06 13:31:28.183335] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 Malloc0 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 [2024-12-06 13:31:28.262546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 [ 00:23:41.699 { 00:23:41.699 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:41.699 "subtype": "Discovery", 00:23:41.699 "listen_addresses": [], 00:23:41.699 "allow_any_host": true, 00:23:41.699 "hosts": [] 00:23:41.699 }, 00:23:41.699 { 00:23:41.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.699 "subtype": "NVMe", 00:23:41.699 "listen_addresses": [ 00:23:41.699 { 00:23:41.699 "trtype": "TCP", 00:23:41.699 "adrfam": "IPv4", 00:23:41.699 "traddr": "10.0.0.2", 00:23:41.699 "trsvcid": "4420" 00:23:41.699 } 00:23:41.699 ], 00:23:41.699 "allow_any_host": true, 00:23:41.699 "hosts": [], 00:23:41.699 "serial_number": "SPDK00000000000001", 00:23:41.699 "model_number": "SPDK bdev Controller", 00:23:41.699 "max_namespaces": 2, 00:23:41.699 "min_cntlid": 1, 00:23:41.699 "max_cntlid": 65519, 00:23:41.699 "namespaces": [ 00:23:41.699 { 00:23:41.699 "nsid": 1, 00:23:41.699 "bdev_name": "Malloc0", 00:23:41.699 "name": "Malloc0", 00:23:41.699 "nguid": "08608FA95B9A41BBB9E22D969F442FF6", 00:23:41.699 "uuid": "08608fa9-5b9a-41bb-b9e2-2d969f442ff6" 00:23:41.699 } 00:23:41.699 ] 00:23:41.699 } 00:23:41.699 ] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2235659 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:41.699 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.961 Malloc1 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.961 Asynchronous Event Request test 00:23:41.961 Attaching to 10.0.0.2 00:23:41.961 Attached to 10.0.0.2 00:23:41.961 Registering asynchronous event callbacks... 00:23:41.961 Starting namespace attribute notice tests for all controllers... 00:23:41.961 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:41.961 aer_cb - Changed Namespace 00:23:41.961 Cleaning up... 
00:23:41.961 [ 00:23:41.961 { 00:23:41.961 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:41.961 "subtype": "Discovery", 00:23:41.961 "listen_addresses": [], 00:23:41.961 "allow_any_host": true, 00:23:41.961 "hosts": [] 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.961 "subtype": "NVMe", 00:23:41.961 "listen_addresses": [ 00:23:41.961 { 00:23:41.961 "trtype": "TCP", 00:23:41.961 "adrfam": "IPv4", 00:23:41.961 "traddr": "10.0.0.2", 00:23:41.961 "trsvcid": "4420" 00:23:41.961 } 00:23:41.961 ], 00:23:41.961 "allow_any_host": true, 00:23:41.961 "hosts": [], 00:23:41.961 "serial_number": "SPDK00000000000001", 00:23:41.961 "model_number": "SPDK bdev Controller", 00:23:41.961 "max_namespaces": 2, 00:23:41.961 "min_cntlid": 1, 00:23:41.961 "max_cntlid": 65519, 00:23:41.961 "namespaces": [ 00:23:41.961 { 00:23:41.961 "nsid": 1, 00:23:41.961 "bdev_name": "Malloc0", 00:23:41.961 "name": "Malloc0", 00:23:41.961 "nguid": "08608FA95B9A41BBB9E22D969F442FF6", 00:23:41.961 "uuid": "08608fa9-5b9a-41bb-b9e2-2d969f442ff6" 00:23:41.961 }, 00:23:41.961 { 00:23:41.961 "nsid": 2, 00:23:41.961 "bdev_name": "Malloc1", 00:23:41.961 "name": "Malloc1", 00:23:41.961 "nguid": "77DC986AEBC44D1E9C6566C75607BE7B", 00:23:41.961 "uuid": "77dc986a-ebc4-4d1e-9c65-66c75607be7b" 00:23:41.961 } 00:23:41.961 ] 00:23:41.961 } 00:23:41.961 ] 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2235659 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.961 13:31:28 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.961 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.222 rmmod nvme_tcp 00:23:42.222 rmmod nvme_fabrics 00:23:42.222 rmmod nvme_keyring 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2235496 ']' 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2235496 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2235496 ']' 00:23:42.222 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2235496 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235496 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235496' 00:23:42.223 killing process with pid 2235496 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2235496 00:23:42.223 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2235496 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.484 13:31:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.396 13:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.396 00:23:44.396 real 0m11.536s 00:23:44.396 user 0m8.214s 00:23:44.396 sys 0m6.185s 00:23:44.396 13:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.396 13:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.396 ************************************ 00:23:44.396 END TEST nvmf_aer 00:23:44.396 ************************************ 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.657 ************************************ 00:23:44.657 START TEST nvmf_async_init 00:23:44.657 ************************************ 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:44.657 * Looking for test storage... 
00:23:44.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:44.657 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.919 13:31:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:44.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.919 --rc genhtml_branch_coverage=1 00:23:44.919 --rc genhtml_function_coverage=1 00:23:44.919 --rc genhtml_legend=1 00:23:44.919 --rc geninfo_all_blocks=1 00:23:44.919 --rc geninfo_unexecuted_blocks=1 00:23:44.919 
00:23:44.919 ' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:44.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.919 --rc genhtml_branch_coverage=1 00:23:44.919 --rc genhtml_function_coverage=1 00:23:44.919 --rc genhtml_legend=1 00:23:44.919 --rc geninfo_all_blocks=1 00:23:44.919 --rc geninfo_unexecuted_blocks=1 00:23:44.919 00:23:44.919 ' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:44.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.919 --rc genhtml_branch_coverage=1 00:23:44.919 --rc genhtml_function_coverage=1 00:23:44.919 --rc genhtml_legend=1 00:23:44.919 --rc geninfo_all_blocks=1 00:23:44.919 --rc geninfo_unexecuted_blocks=1 00:23:44.919 00:23:44.919 ' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:44.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.919 --rc genhtml_branch_coverage=1 00:23:44.919 --rc genhtml_function_coverage=1 00:23:44.919 --rc genhtml_legend=1 00:23:44.919 --rc geninfo_all_blocks=1 00:23:44.919 --rc geninfo_unexecuted_blocks=1 00:23:44.919 00:23:44.919 ' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.919 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=07ea709b0efe4e2d8f3155267743f500 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.920 13:31:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.053 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.053 13:31:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:53.054 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:53.054 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:53.054 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:53.054 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:23:53.054 00:23:53.054 --- 10.0.0.2 ping statistics --- 00:23:53.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.054 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:53.054 00:23:53.054 --- 10.0.0.1 ping statistics --- 00:23:53.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.054 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2239879 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2239879 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2239879 ']' 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.054 13:31:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.054 [2024-12-06 13:31:38.956451] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:23:53.054 [2024-12-06 13:31:38.956525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.054 [2024-12-06 13:31:39.054728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.054 [2024-12-06 13:31:39.105740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.054 [2024-12-06 13:31:39.105790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.054 [2024-12-06 13:31:39.105798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.054 [2024-12-06 13:31:39.105806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.054 [2024-12-06 13:31:39.105812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.054 [2024-12-06 13:31:39.106573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.314 [2024-12-06 13:31:39.818499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.314 null0 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 07ea709b0efe4e2d8f3155267743f500 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.314 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.315 [2024-12-06 13:31:39.878860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.315 13:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.575 nvme0n1 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.575 [ 00:23:53.575 { 00:23:53.575 "name": "nvme0n1", 00:23:53.575 "aliases": [ 00:23:53.575 "07ea709b-0efe-4e2d-8f31-55267743f500" 00:23:53.575 ], 00:23:53.575 "product_name": "NVMe disk", 00:23:53.575 "block_size": 512, 00:23:53.575 "num_blocks": 2097152, 00:23:53.575 "uuid": "07ea709b-0efe-4e2d-8f31-55267743f500", 00:23:53.575 "numa_id": 0, 00:23:53.575 "assigned_rate_limits": { 00:23:53.575 "rw_ios_per_sec": 0, 00:23:53.575 "rw_mbytes_per_sec": 0, 00:23:53.575 "r_mbytes_per_sec": 0, 00:23:53.575 "w_mbytes_per_sec": 0 00:23:53.575 }, 00:23:53.575 "claimed": false, 00:23:53.575 "zoned": false, 00:23:53.575 "supported_io_types": { 00:23:53.575 "read": true, 00:23:53.575 "write": true, 00:23:53.575 "unmap": false, 00:23:53.575 "flush": true, 00:23:53.575 "reset": true, 00:23:53.575 "nvme_admin": true, 00:23:53.575 "nvme_io": true, 00:23:53.575 "nvme_io_md": false, 00:23:53.575 "write_zeroes": true, 00:23:53.575 "zcopy": false, 00:23:53.575 "get_zone_info": false, 00:23:53.575 "zone_management": false, 00:23:53.575 "zone_append": false, 00:23:53.575 "compare": true, 00:23:53.575 "compare_and_write": true, 00:23:53.575 "abort": true, 00:23:53.575 "seek_hole": false, 00:23:53.575 "seek_data": false, 00:23:53.575 "copy": true, 00:23:53.575 
"nvme_iov_md": false 00:23:53.575 }, 00:23:53.575 "memory_domains": [ 00:23:53.575 { 00:23:53.575 "dma_device_id": "system", 00:23:53.575 "dma_device_type": 1 00:23:53.575 } 00:23:53.575 ], 00:23:53.575 "driver_specific": { 00:23:53.575 "nvme": [ 00:23:53.575 { 00:23:53.575 "trid": { 00:23:53.575 "trtype": "TCP", 00:23:53.575 "adrfam": "IPv4", 00:23:53.575 "traddr": "10.0.0.2", 00:23:53.575 "trsvcid": "4420", 00:23:53.575 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:53.575 }, 00:23:53.575 "ctrlr_data": { 00:23:53.575 "cntlid": 1, 00:23:53.575 "vendor_id": "0x8086", 00:23:53.575 "model_number": "SPDK bdev Controller", 00:23:53.575 "serial_number": "00000000000000000000", 00:23:53.575 "firmware_revision": "25.01", 00:23:53.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.575 "oacs": { 00:23:53.575 "security": 0, 00:23:53.575 "format": 0, 00:23:53.575 "firmware": 0, 00:23:53.575 "ns_manage": 0 00:23:53.575 }, 00:23:53.575 "multi_ctrlr": true, 00:23:53.575 "ana_reporting": false 00:23:53.575 }, 00:23:53.575 "vs": { 00:23:53.575 "nvme_version": "1.3" 00:23:53.575 }, 00:23:53.575 "ns_data": { 00:23:53.575 "id": 1, 00:23:53.575 "can_share": true 00:23:53.575 } 00:23:53.575 } 00:23:53.575 ], 00:23:53.575 "mp_policy": "active_passive" 00:23:53.575 } 00:23:53.575 } 00:23:53.575 ] 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.575 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.575 [2024-12-06 13:31:40.153889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:53.575 [2024-12-06 13:31:40.153979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x251b880 (9): Bad file descriptor 00:23:53.835 [2024-12-06 13:31:40.285566] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.835 [ 00:23:53.835 { 00:23:53.835 "name": "nvme0n1", 00:23:53.835 "aliases": [ 00:23:53.835 "07ea709b-0efe-4e2d-8f31-55267743f500" 00:23:53.835 ], 00:23:53.835 "product_name": "NVMe disk", 00:23:53.835 "block_size": 512, 00:23:53.835 "num_blocks": 2097152, 00:23:53.835 "uuid": "07ea709b-0efe-4e2d-8f31-55267743f500", 00:23:53.835 "numa_id": 0, 00:23:53.835 "assigned_rate_limits": { 00:23:53.835 "rw_ios_per_sec": 0, 00:23:53.835 "rw_mbytes_per_sec": 0, 00:23:53.835 "r_mbytes_per_sec": 0, 00:23:53.835 "w_mbytes_per_sec": 0 00:23:53.835 }, 00:23:53.835 "claimed": false, 00:23:53.835 "zoned": false, 00:23:53.835 "supported_io_types": { 00:23:53.835 "read": true, 00:23:53.835 "write": true, 00:23:53.835 "unmap": false, 00:23:53.835 "flush": true, 00:23:53.835 "reset": true, 00:23:53.835 "nvme_admin": true, 00:23:53.835 "nvme_io": true, 00:23:53.835 "nvme_io_md": false, 00:23:53.835 "write_zeroes": true, 00:23:53.835 "zcopy": false, 00:23:53.835 "get_zone_info": false, 00:23:53.835 "zone_management": false, 00:23:53.835 "zone_append": false, 00:23:53.835 "compare": true, 00:23:53.835 "compare_and_write": true, 00:23:53.835 "abort": true, 00:23:53.835 "seek_hole": false, 00:23:53.835 "seek_data": false, 00:23:53.835 "copy": true, 00:23:53.835 "nvme_iov_md": false 00:23:53.835 }, 00:23:53.835 "memory_domains": [ 
00:23:53.835 { 00:23:53.835 "dma_device_id": "system", 00:23:53.835 "dma_device_type": 1 00:23:53.835 } 00:23:53.835 ], 00:23:53.835 "driver_specific": { 00:23:53.835 "nvme": [ 00:23:53.835 { 00:23:53.835 "trid": { 00:23:53.835 "trtype": "TCP", 00:23:53.835 "adrfam": "IPv4", 00:23:53.835 "traddr": "10.0.0.2", 00:23:53.835 "trsvcid": "4420", 00:23:53.835 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:53.835 }, 00:23:53.835 "ctrlr_data": { 00:23:53.835 "cntlid": 2, 00:23:53.835 "vendor_id": "0x8086", 00:23:53.835 "model_number": "SPDK bdev Controller", 00:23:53.835 "serial_number": "00000000000000000000", 00:23:53.835 "firmware_revision": "25.01", 00:23:53.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.835 "oacs": { 00:23:53.835 "security": 0, 00:23:53.835 "format": 0, 00:23:53.835 "firmware": 0, 00:23:53.835 "ns_manage": 0 00:23:53.835 }, 00:23:53.835 "multi_ctrlr": true, 00:23:53.835 "ana_reporting": false 00:23:53.835 }, 00:23:53.835 "vs": { 00:23:53.835 "nvme_version": "1.3" 00:23:53.835 }, 00:23:53.835 "ns_data": { 00:23:53.835 "id": 1, 00:23:53.835 "can_share": true 00:23:53.835 } 00:23:53.835 } 00:23:53.835 ], 00:23:53.835 "mp_policy": "active_passive" 00:23:53.835 } 00:23:53.835 } 00:23:53.835 ] 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.iktM8Sxfsz 
00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.iktM8Sxfsz 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.iktM8Sxfsz 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.835 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.836 [2024-12-06 13:31:40.374576] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.836 [2024-12-06 13:31:40.374763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.836 [2024-12-06 13:31:40.398648] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.836 nvme0n1 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:53.836 [ 00:23:53.836 { 00:23:53.836 "name": "nvme0n1", 00:23:53.836 "aliases": [ 00:23:53.836 "07ea709b-0efe-4e2d-8f31-55267743f500" 00:23:53.836 ], 00:23:53.836 "product_name": "NVMe disk", 00:23:53.836 "block_size": 512, 00:23:53.836 "num_blocks": 2097152, 00:23:53.836 "uuid": "07ea709b-0efe-4e2d-8f31-55267743f500", 00:23:53.836 "numa_id": 0, 00:23:53.836 "assigned_rate_limits": { 00:23:53.836 "rw_ios_per_sec": 0, 00:23:53.836 
"rw_mbytes_per_sec": 0, 00:23:53.836 "r_mbytes_per_sec": 0, 00:23:53.836 "w_mbytes_per_sec": 0 00:23:53.836 }, 00:23:53.836 "claimed": false, 00:23:53.836 "zoned": false, 00:23:53.836 "supported_io_types": { 00:23:53.836 "read": true, 00:23:53.836 "write": true, 00:23:53.836 "unmap": false, 00:23:53.836 "flush": true, 00:23:53.836 "reset": true, 00:23:53.836 "nvme_admin": true, 00:23:53.836 "nvme_io": true, 00:23:53.836 "nvme_io_md": false, 00:23:53.836 "write_zeroes": true, 00:23:53.836 "zcopy": false, 00:23:53.836 "get_zone_info": false, 00:23:53.836 "zone_management": false, 00:23:53.836 "zone_append": false, 00:23:53.836 "compare": true, 00:23:53.836 "compare_and_write": true, 00:23:53.836 "abort": true, 00:23:53.836 "seek_hole": false, 00:23:53.836 "seek_data": false, 00:23:53.836 "copy": true, 00:23:53.836 "nvme_iov_md": false 00:23:53.836 }, 00:23:53.836 "memory_domains": [ 00:23:53.836 { 00:23:53.836 "dma_device_id": "system", 00:23:53.836 "dma_device_type": 1 00:23:53.836 } 00:23:53.836 ], 00:23:53.836 "driver_specific": { 00:23:53.836 "nvme": [ 00:23:53.836 { 00:23:53.836 "trid": { 00:23:53.836 "trtype": "TCP", 00:23:53.836 "adrfam": "IPv4", 00:23:53.836 "traddr": "10.0.0.2", 00:23:53.836 "trsvcid": "4421", 00:23:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:53.836 }, 00:23:53.836 "ctrlr_data": { 00:23:53.836 "cntlid": 3, 00:23:53.836 "vendor_id": "0x8086", 00:23:53.836 "model_number": "SPDK bdev Controller", 00:23:53.836 "serial_number": "00000000000000000000", 00:23:53.836 "firmware_revision": "25.01", 00:23:53.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.836 "oacs": { 00:23:53.836 "security": 0, 00:23:53.836 "format": 0, 00:23:53.836 "firmware": 0, 00:23:53.836 "ns_manage": 0 00:23:53.836 }, 00:23:53.836 "multi_ctrlr": true, 00:23:53.836 "ana_reporting": false 00:23:53.836 }, 00:23:53.836 "vs": { 00:23:53.836 "nvme_version": "1.3" 00:23:53.836 }, 00:23:53.836 "ns_data": { 00:23:53.836 "id": 1, 00:23:53.836 "can_share": true 00:23:53.836 } 
00:23:53.836 } 00:23:53.836 ], 00:23:53.836 "mp_policy": "active_passive" 00:23:53.836 } 00:23:53.836 } 00:23:53.836 ] 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.836 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:54.095 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.iktM8Sxfsz 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.096 rmmod nvme_tcp 00:23:54.096 rmmod nvme_fabrics 00:23:54.096 rmmod nvme_keyring 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:54.096 13:31:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2239879 ']' 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2239879 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2239879 ']' 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2239879 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2239879 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2239879' 00:23:54.096 killing process with pid 2239879 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2239879 00:23:54.096 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2239879 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.356 
13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.356 13:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.266 13:31:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:56.266 00:23:56.266 real 0m11.766s 00:23:56.266 user 0m4.198s 00:23:56.266 sys 0m6.163s 00:23:56.266 13:31:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.266 13:31:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.266 ************************************ 00:23:56.266 END TEST nvmf_async_init 00:23:56.266 ************************************ 00:23:56.526 13:31:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:56.526 13:31:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.526 13:31:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.526 13:31:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.526 ************************************ 00:23:56.526 START TEST dma 00:23:56.526 ************************************ 00:23:56.526 13:31:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
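For readers reconstructing what the `nvmf_async_init` test above actually did, the xtrace lines boil down to the RPC sequence below. This is a sketch assembled from the commands visible in this log, not a standalone script: it assumes a running SPDK nvmf target, SPDK's `rpc.py` on the PATH, and a reachable 10.0.0.2 test address, none of which this log configures here.

```shell
# RPC sequence exercised by async_init.sh, as echoed in the xtrace above.
# Assumes a live SPDK target and scripts/rpc.py (both assumptions).
rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_null_create null0 1024 512
rpc.py bdev_wait_for_examine
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 07ea709b0efe4e2d8f3155267743f500
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
rpc.py bdev_get_bdevs -b nvme0n1          # verify attach (cntlid 1)
rpc.py bdev_nvme_reset_controller nvme0   # reset; re-verify (cntlid 2)
rpc.py bdev_nvme_detach_controller nvme0
# TLS portion: register a PSK and repeat the attach over a secure channel.
rpc.py keyring_file_add_key key0 "$key_path"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 \
    --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
```

The GUID passed to `nvmf_subsystem_add_ns` is why the attached bdev reports UUID `07ea709b-0efe-4e2d-8f31-55267743f500` in the JSON dumps above, and the rising `cntlid` values (1, 2, 3) track each fresh controller association across the reset and the TLS re-attach.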
00:23:56.526 * Looking for test storage... 00:23:56.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.526 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.787 --rc genhtml_branch_coverage=1 00:23:56.787 --rc genhtml_function_coverage=1 00:23:56.787 --rc genhtml_legend=1 00:23:56.787 --rc geninfo_all_blocks=1 00:23:56.787 --rc geninfo_unexecuted_blocks=1 00:23:56.787 00:23:56.787 ' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.787 --rc genhtml_branch_coverage=1 00:23:56.787 --rc genhtml_function_coverage=1 
00:23:56.787 --rc genhtml_legend=1 00:23:56.787 --rc geninfo_all_blocks=1 00:23:56.787 --rc geninfo_unexecuted_blocks=1 00:23:56.787 00:23:56.787 ' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.787 --rc genhtml_branch_coverage=1 00:23:56.787 --rc genhtml_function_coverage=1 00:23:56.787 --rc genhtml_legend=1 00:23:56.787 --rc geninfo_all_blocks=1 00:23:56.787 --rc geninfo_unexecuted_blocks=1 00:23:56.787 00:23:56.787 ' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:56.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.787 --rc genhtml_branch_coverage=1 00:23:56.787 --rc genhtml_function_coverage=1 00:23:56.787 --rc genhtml_legend=1 00:23:56.787 --rc geninfo_all_blocks=1 00:23:56.787 --rc geninfo_unexecuted_blocks=1 00:23:56.787 00:23:56.787 ' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:56.787 
13:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:56.787 00:23:56.787 real 0m0.241s 00:23:56.787 user 0m0.135s 00:23:56.787 sys 0m0.122s 00:23:56.787 13:31:43 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:56.787 ************************************ 00:23:56.787 END TEST dma 00:23:56.787 ************************************ 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.787 ************************************ 00:23:56.787 START TEST nvmf_identify 00:23:56.787 ************************************ 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:56.787 * Looking for test storage... 
00:23:56.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.787 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.047 --rc genhtml_branch_coverage=1 00:23:57.047 --rc genhtml_function_coverage=1 00:23:57.047 --rc genhtml_legend=1 00:23:57.047 --rc geninfo_all_blocks=1 00:23:57.047 --rc geninfo_unexecuted_blocks=1 00:23:57.047 00:23:57.047 ' 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.047 --rc genhtml_branch_coverage=1 00:23:57.047 --rc genhtml_function_coverage=1 00:23:57.047 --rc genhtml_legend=1 00:23:57.047 --rc geninfo_all_blocks=1 00:23:57.047 --rc geninfo_unexecuted_blocks=1 00:23:57.047 00:23:57.047 ' 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.047 --rc genhtml_branch_coverage=1 00:23:57.047 --rc genhtml_function_coverage=1 00:23:57.047 --rc genhtml_legend=1 00:23:57.047 --rc geninfo_all_blocks=1 00:23:57.047 --rc geninfo_unexecuted_blocks=1 00:23:57.047 00:23:57.047 ' 00:23:57.047 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:57.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.047 --rc genhtml_branch_coverage=1 00:23:57.047 --rc genhtml_function_coverage=1 00:23:57.047 --rc genhtml_legend=1 00:23:57.047 --rc geninfo_all_blocks=1 00:23:57.047 --rc geninfo_unexecuted_blocks=1 00:23:57.047 00:23:57.047 ' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.048 13:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.197 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.197 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.197 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.197 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.197 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.197 13:31:50 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:05.198 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.198 
13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:05.198 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:05.198 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:05.198 13:31:50 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:05.198 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.198 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.199 13:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:24:05.199 00:24:05.199 --- 10.0.0.2 ping statistics --- 00:24:05.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.199 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:24:05.199 00:24:05.199 --- 10.0.0.1 ping statistics --- 00:24:05.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.199 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2244592 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2244592 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2244592 ']' 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:05.199 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.199 [2024-12-06 13:31:51.116795] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:24:05.199 [2024-12-06 13:31:51.116862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:05.199 [2024-12-06 13:31:51.217632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:05.199 [2024-12-06 13:31:51.271529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:05.199 [2024-12-06 13:31:51.271586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:05.199 [2024-12-06 13:31:51.271595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:05.199 [2024-12-06 13:31:51.271602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:05.199 [2024-12-06 13:31:51.271609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:05.199 [2024-12-06 13:31:51.274068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:05.199 [2024-12-06 13:31:51.274228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:05.199 [2024-12-06 13:31:51.274355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:05.199 [2024-12-06 13:31:51.274355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 [2024-12-06 13:31:51.952146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:05.461 13:31:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 Malloc0
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 [2024-12-06 13:31:52.075367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:05.461 [
00:24:05.461 {
00:24:05.461 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:05.461 "subtype": "Discovery",
00:24:05.461 "listen_addresses": [
00:24:05.461 {
00:24:05.461 "trtype": "TCP",
00:24:05.461 "adrfam": "IPv4",
00:24:05.461 "traddr": "10.0.0.2",
00:24:05.461 "trsvcid": "4420"
00:24:05.461 }
00:24:05.461 ],
00:24:05.461 "allow_any_host": true,
00:24:05.461 "hosts": []
00:24:05.461 },
00:24:05.461 {
00:24:05.461 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:05.461 "subtype": "NVMe",
00:24:05.461 "listen_addresses": [
00:24:05.461 {
00:24:05.461 "trtype": "TCP",
00:24:05.461 "adrfam": "IPv4",
00:24:05.461 "traddr": "10.0.0.2",
00:24:05.461 "trsvcid": "4420"
00:24:05.461 }
00:24:05.461 ],
00:24:05.461 "allow_any_host": true,
00:24:05.461 "hosts": [],
00:24:05.461 "serial_number": "SPDK00000000000001",
00:24:05.461 "model_number": "SPDK bdev Controller",
00:24:05.461 "max_namespaces": 32,
00:24:05.461 "min_cntlid": 1,
00:24:05.461 "max_cntlid": 65519,
00:24:05.461 "namespaces": [
00:24:05.461 {
00:24:05.461 "nsid": 1,
00:24:05.461 "bdev_name": "Malloc0",
00:24:05.461 "name": "Malloc0",
00:24:05.461 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:24:05.461 "eui64": "ABCDEF0123456789",
00:24:05.461 "uuid": "95493963-69a1-432f-a7ec-847506a04fad"
00:24:05.461 }
00:24:05.461 ]
00:24:05.461 }
00:24:05.461 ]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.461 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:05.725 [2024-12-06 13:31:52.140579] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:24:05.725 [2024-12-06 13:31:52.140627] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244853 ] 00:24:05.725 [2024-12-06 13:31:52.196235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:05.725 [2024-12-06 13:31:52.196315] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:05.725 [2024-12-06 13:31:52.196321] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:05.725 [2024-12-06 13:31:52.196341] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:05.725 [2024-12-06 13:31:52.196352] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:05.725 [2024-12-06 13:31:52.199898] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:05.725 [2024-12-06 13:31:52.199963] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x91e690 0 00:24:05.725 [2024-12-06 13:31:52.200189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:05.725 [2024-12-06 13:31:52.200199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:05.725 [2024-12-06 13:31:52.200205] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:05.725 [2024-12-06 13:31:52.200208] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:05.725 [2024-12-06 13:31:52.200256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.200263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.200267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.725 [2024-12-06 13:31:52.200286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:05.725 [2024-12-06 13:31:52.200303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 13:31:52.207470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.207481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.725 [2024-12-06 13:31:52.207485] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.207505] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:05.725 [2024-12-06 13:31:52.207514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:05.725 [2024-12-06 13:31:52.207520] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:05.725 [2024-12-06 13:31:52.207538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 
00:24:05.725 [2024-12-06 13:31:52.207555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.725 [2024-12-06 13:31:52.207570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 13:31:52.207782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.207788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.725 [2024-12-06 13:31:52.207792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.207802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:05.725 [2024-12-06 13:31:52.207810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:05.725 [2024-12-06 13:31:52.207817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.725 [2024-12-06 13:31:52.207832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.725 [2024-12-06 13:31:52.207843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 13:31:52.207910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.207916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:05.725 [2024-12-06 13:31:52.207920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.207930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:05.725 [2024-12-06 13:31:52.207939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:05.725 [2024-12-06 13:31:52.207953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.207960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.725 [2024-12-06 13:31:52.207967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.725 [2024-12-06 13:31:52.207978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 13:31:52.208055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.208062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.725 [2024-12-06 13:31:52.208065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.208075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:05.725 [2024-12-06 13:31:52.208084] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.725 [2024-12-06 13:31:52.208099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.725 [2024-12-06 13:31:52.208109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 13:31:52.208176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.208182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.725 [2024-12-06 13:31:52.208185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.208195] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:05.725 [2024-12-06 13:31:52.208200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:05.725 [2024-12-06 13:31:52.208209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:05.725 [2024-12-06 13:31:52.208319] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:05.725 [2024-12-06 13:31:52.208324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:05.725 [2024-12-06 13:31:52.208334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.725 [2024-12-06 13:31:52.208348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.725 [2024-12-06 13:31:52.208359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 13:31:52.208440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.208446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.725 [2024-12-06 13:31:52.208449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.208464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:05.725 [2024-12-06 13:31:52.208479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.725 [2024-12-06 13:31:52.208494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.725 [2024-12-06 13:31:52.208505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.725 [2024-12-06 
13:31:52.208777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.725 [2024-12-06 13:31:52.208785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.725 [2024-12-06 13:31:52.208788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.725 [2024-12-06 13:31:52.208792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.725 [2024-12-06 13:31:52.208797] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:05.726 [2024-12-06 13:31:52.208802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:05.726 [2024-12-06 13:31:52.208810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:05.726 [2024-12-06 13:31:52.208825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:05.726 [2024-12-06 13:31:52.208836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.208840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.208847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.726 [2024-12-06 13:31:52.208858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.726 [2024-12-06 13:31:52.209082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.726 [2024-12-06 13:31:52.209088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:05.726 [2024-12-06 13:31:52.209092] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209097] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x91e690): datao=0, datal=4096, cccid=0 00:24:05.726 [2024-12-06 13:31:52.209102] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x980100) on tqpair(0x91e690): expected_datao=0, payload_size=4096 00:24:05.726 [2024-12-06 13:31:52.209107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209132] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209137] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.726 [2024-12-06 13:31:52.209320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.726 [2024-12-06 13:31:52.209323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.726 [2024-12-06 13:31:52.209337] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:05.726 [2024-12-06 13:31:52.209346] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:05.726 [2024-12-06 13:31:52.209350] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:05.726 [2024-12-06 13:31:52.209359] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:05.726 [2024-12-06 13:31:52.209364] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:24:05.726 [2024-12-06 13:31:52.209369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:05.726 [2024-12-06 13:31:52.209377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:05.726 [2024-12-06 13:31:52.209385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.209400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.726 [2024-12-06 13:31:52.209411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.726 [2024-12-06 13:31:52.209602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.726 [2024-12-06 13:31:52.209609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.726 [2024-12-06 13:31:52.209613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.726 [2024-12-06 13:31:52.209626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.209639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.726 [2024-12-06 13:31:52.209646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.209659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.726 [2024-12-06 13:31:52.209665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.209678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.726 [2024-12-06 13:31:52.209685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.209698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.726 [2024-12-06 13:31:52.209703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:05.726 [2024-12-06 13:31:52.209714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:24:05.726 [2024-12-06 13:31:52.209721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.209724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.209734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.726 [2024-12-06 13:31:52.209746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980100, cid 0, qid 0 00:24:05.726 [2024-12-06 13:31:52.209752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980280, cid 1, qid 0 00:24:05.726 [2024-12-06 13:31:52.209756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980400, cid 2, qid 0 00:24:05.726 [2024-12-06 13:31:52.209761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.726 [2024-12-06 13:31:52.209766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980700, cid 4, qid 0 00:24:05.726 [2024-12-06 13:31:52.210007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.726 [2024-12-06 13:31:52.210014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.726 [2024-12-06 13:31:52.210017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980700) on tqpair=0x91e690 00:24:05.726 [2024-12-06 13:31:52.210027] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:05.726 [2024-12-06 13:31:52.210032] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:05.726 [2024-12-06 13:31:52.210043] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.210054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.726 [2024-12-06 13:31:52.210064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980700, cid 4, qid 0 00:24:05.726 [2024-12-06 13:31:52.210254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.726 [2024-12-06 13:31:52.210260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.726 [2024-12-06 13:31:52.210263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x91e690): datao=0, datal=4096, cccid=4 00:24:05.726 [2024-12-06 13:31:52.210271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x980700) on tqpair(0x91e690): expected_datao=0, payload_size=4096 00:24:05.726 [2024-12-06 13:31:52.210276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210293] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210297] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.726 [2024-12-06 13:31:52.210469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.726 [2024-12-06 13:31:52.210472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980700) on tqpair=0x91e690 00:24:05.726 [2024-12-06 13:31:52.210491] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:05.726 [2024-12-06 13:31:52.210521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.210532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.726 [2024-12-06 13:31:52.210540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x91e690) 00:24:05.726 [2024-12-06 13:31:52.210555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.726 [2024-12-06 13:31:52.210571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980700, cid 4, qid 0 00:24:05.726 [2024-12-06 13:31:52.210577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980880, cid 5, qid 0 00:24:05.726 [2024-12-06 13:31:52.210841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.726 [2024-12-06 13:31:52.210848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.726 [2024-12-06 13:31:52.210852] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.726 [2024-12-06 13:31:52.210855] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x91e690): datao=0, datal=1024, cccid=4 00:24:05.726 [2024-12-06 13:31:52.210860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x980700) on tqpair(0x91e690): expected_datao=0, 
payload_size=1024 00:24:05.726 [2024-12-06 13:31:52.210864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.210871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.210875] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.210880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.727 [2024-12-06 13:31:52.210886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.727 [2024-12-06 13:31:52.210890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.210894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980880) on tqpair=0x91e690 00:24:05.727 [2024-12-06 13:31:52.253469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.727 [2024-12-06 13:31:52.253482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.727 [2024-12-06 13:31:52.253486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980700) on tqpair=0x91e690 00:24:05.727 [2024-12-06 13:31:52.253505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x91e690) 00:24:05.727 [2024-12-06 13:31:52.253517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.727 [2024-12-06 13:31:52.253534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980700, cid 4, qid 0 00:24:05.727 [2024-12-06 13:31:52.253797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.727 [2024-12-06 13:31:52.253804] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.727 [2024-12-06 13:31:52.253808] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253812] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x91e690): datao=0, datal=3072, cccid=4 00:24:05.727 [2024-12-06 13:31:52.253816] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x980700) on tqpair(0x91e690): expected_datao=0, payload_size=3072 00:24:05.727 [2024-12-06 13:31:52.253821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253828] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253832] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.727 [2024-12-06 13:31:52.253940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.727 [2024-12-06 13:31:52.253944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980700) on tqpair=0x91e690 00:24:05.727 [2024-12-06 13:31:52.253956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.253964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x91e690) 00:24:05.727 [2024-12-06 13:31:52.253971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.727 [2024-12-06 13:31:52.253986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980700, cid 4, qid 0 00:24:05.727 [2024-12-06 13:31:52.254240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.727 [2024-12-06 
13:31:52.254246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.727 [2024-12-06 13:31:52.254250] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.254253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x91e690): datao=0, datal=8, cccid=4 00:24:05.727 [2024-12-06 13:31:52.254258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x980700) on tqpair(0x91e690): expected_datao=0, payload_size=8 00:24:05.727 [2024-12-06 13:31:52.254262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.254269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.254272] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.295695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.727 [2024-12-06 13:31:52.295707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.727 [2024-12-06 13:31:52.295711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.727 [2024-12-06 13:31:52.295715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980700) on tqpair=0x91e690 00:24:05.727 ===================================================== 00:24:05.727 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:05.727 ===================================================== 00:24:05.727 Controller Capabilities/Features 00:24:05.727 ================================ 00:24:05.727 Vendor ID: 0000 00:24:05.727 Subsystem Vendor ID: 0000 00:24:05.727 Serial Number: .................... 00:24:05.727 Model Number: ........................................ 
00:24:05.727 Firmware Version: 25.01 00:24:05.727 Recommended Arb Burst: 0 00:24:05.727 IEEE OUI Identifier: 00 00 00 00:24:05.727 Multi-path I/O 00:24:05.727 May have multiple subsystem ports: No 00:24:05.727 May have multiple controllers: No 00:24:05.727 Associated with SR-IOV VF: No 00:24:05.727 Max Data Transfer Size: 131072 00:24:05.727 Max Number of Namespaces: 0 00:24:05.727 Max Number of I/O Queues: 1024 00:24:05.727 NVMe Specification Version (VS): 1.3 00:24:05.727 NVMe Specification Version (Identify): 1.3 00:24:05.727 Maximum Queue Entries: 128 00:24:05.727 Contiguous Queues Required: Yes 00:24:05.727 Arbitration Mechanisms Supported 00:24:05.727 Weighted Round Robin: Not Supported 00:24:05.727 Vendor Specific: Not Supported 00:24:05.727 Reset Timeout: 15000 ms 00:24:05.727 Doorbell Stride: 4 bytes 00:24:05.727 NVM Subsystem Reset: Not Supported 00:24:05.727 Command Sets Supported 00:24:05.727 NVM Command Set: Supported 00:24:05.727 Boot Partition: Not Supported 00:24:05.727 Memory Page Size Minimum: 4096 bytes 00:24:05.727 Memory Page Size Maximum: 4096 bytes 00:24:05.727 Persistent Memory Region: Not Supported 00:24:05.727 Optional Asynchronous Events Supported 00:24:05.727 Namespace Attribute Notices: Not Supported 00:24:05.727 Firmware Activation Notices: Not Supported 00:24:05.727 ANA Change Notices: Not Supported 00:24:05.727 PLE Aggregate Log Change Notices: Not Supported 00:24:05.727 LBA Status Info Alert Notices: Not Supported 00:24:05.727 EGE Aggregate Log Change Notices: Not Supported 00:24:05.727 Normal NVM Subsystem Shutdown event: Not Supported 00:24:05.727 Zone Descriptor Change Notices: Not Supported 00:24:05.727 Discovery Log Change Notices: Supported 00:24:05.727 Controller Attributes 00:24:05.727 128-bit Host Identifier: Not Supported 00:24:05.727 Non-Operational Permissive Mode: Not Supported 00:24:05.727 NVM Sets: Not Supported 00:24:05.727 Read Recovery Levels: Not Supported 00:24:05.727 Endurance Groups: Not Supported 00:24:05.727 
Predictable Latency Mode: Not Supported 00:24:05.727 Traffic Based Keep ALive: Not Supported 00:24:05.727 Namespace Granularity: Not Supported 00:24:05.727 SQ Associations: Not Supported 00:24:05.727 UUID List: Not Supported 00:24:05.727 Multi-Domain Subsystem: Not Supported 00:24:05.727 Fixed Capacity Management: Not Supported 00:24:05.727 Variable Capacity Management: Not Supported 00:24:05.727 Delete Endurance Group: Not Supported 00:24:05.727 Delete NVM Set: Not Supported 00:24:05.727 Extended LBA Formats Supported: Not Supported 00:24:05.727 Flexible Data Placement Supported: Not Supported 00:24:05.727 00:24:05.727 Controller Memory Buffer Support 00:24:05.727 ================================ 00:24:05.727 Supported: No 00:24:05.727 00:24:05.727 Persistent Memory Region Support 00:24:05.727 ================================ 00:24:05.727 Supported: No 00:24:05.727 00:24:05.727 Admin Command Set Attributes 00:24:05.727 ============================ 00:24:05.727 Security Send/Receive: Not Supported 00:24:05.727 Format NVM: Not Supported 00:24:05.727 Firmware Activate/Download: Not Supported 00:24:05.727 Namespace Management: Not Supported 00:24:05.727 Device Self-Test: Not Supported 00:24:05.727 Directives: Not Supported 00:24:05.727 NVMe-MI: Not Supported 00:24:05.727 Virtualization Management: Not Supported 00:24:05.727 Doorbell Buffer Config: Not Supported 00:24:05.727 Get LBA Status Capability: Not Supported 00:24:05.727 Command & Feature Lockdown Capability: Not Supported 00:24:05.727 Abort Command Limit: 1 00:24:05.727 Async Event Request Limit: 4 00:24:05.727 Number of Firmware Slots: N/A 00:24:05.727 Firmware Slot 1 Read-Only: N/A 00:24:05.727 Firmware Activation Without Reset: N/A 00:24:05.727 Multiple Update Detection Support: N/A 00:24:05.727 Firmware Update Granularity: No Information Provided 00:24:05.727 Per-Namespace SMART Log: No 00:24:05.727 Asymmetric Namespace Access Log Page: Not Supported 00:24:05.727 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:05.727 Command Effects Log Page: Not Supported 00:24:05.727 Get Log Page Extended Data: Supported 00:24:05.727 Telemetry Log Pages: Not Supported 00:24:05.727 Persistent Event Log Pages: Not Supported 00:24:05.727 Supported Log Pages Log Page: May Support 00:24:05.727 Commands Supported & Effects Log Page: Not Supported 00:24:05.727 Feature Identifiers & Effects Log Page:May Support 00:24:05.727 NVMe-MI Commands & Effects Log Page: May Support 00:24:05.727 Data Area 4 for Telemetry Log: Not Supported 00:24:05.727 Error Log Page Entries Supported: 128 00:24:05.727 Keep Alive: Not Supported 00:24:05.727 00:24:05.727 NVM Command Set Attributes 00:24:05.727 ========================== 00:24:05.727 Submission Queue Entry Size 00:24:05.727 Max: 1 00:24:05.727 Min: 1 00:24:05.727 Completion Queue Entry Size 00:24:05.727 Max: 1 00:24:05.727 Min: 1 00:24:05.727 Number of Namespaces: 0 00:24:05.727 Compare Command: Not Supported 00:24:05.727 Write Uncorrectable Command: Not Supported 00:24:05.727 Dataset Management Command: Not Supported 00:24:05.727 Write Zeroes Command: Not Supported 00:24:05.728 Set Features Save Field: Not Supported 00:24:05.728 Reservations: Not Supported 00:24:05.728 Timestamp: Not Supported 00:24:05.728 Copy: Not Supported 00:24:05.728 Volatile Write Cache: Not Present 00:24:05.728 Atomic Write Unit (Normal): 1 00:24:05.728 Atomic Write Unit (PFail): 1 00:24:05.728 Atomic Compare & Write Unit: 1 00:24:05.728 Fused Compare & Write: Supported 00:24:05.728 Scatter-Gather List 00:24:05.728 SGL Command Set: Supported 00:24:05.728 SGL Keyed: Supported 00:24:05.728 SGL Bit Bucket Descriptor: Not Supported 00:24:05.728 SGL Metadata Pointer: Not Supported 00:24:05.728 Oversized SGL: Not Supported 00:24:05.728 SGL Metadata Address: Not Supported 00:24:05.728 SGL Offset: Supported 00:24:05.728 Transport SGL Data Block: Not Supported 00:24:05.728 Replay Protected Memory Block: Not Supported 00:24:05.728 00:24:05.728 
Firmware Slot Information 00:24:05.728 ========================= 00:24:05.728 Active slot: 0 00:24:05.728 00:24:05.728 00:24:05.728 Error Log 00:24:05.728 ========= 00:24:05.728 00:24:05.728 Active Namespaces 00:24:05.728 ================= 00:24:05.728 Discovery Log Page 00:24:05.728 ================== 00:24:05.728 Generation Counter: 2 00:24:05.728 Number of Records: 2 00:24:05.728 Record Format: 0 00:24:05.728 00:24:05.728 Discovery Log Entry 0 00:24:05.728 ---------------------- 00:24:05.728 Transport Type: 3 (TCP) 00:24:05.728 Address Family: 1 (IPv4) 00:24:05.728 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:05.728 Entry Flags: 00:24:05.728 Duplicate Returned Information: 1 00:24:05.728 Explicit Persistent Connection Support for Discovery: 1 00:24:05.728 Transport Requirements: 00:24:05.728 Secure Channel: Not Required 00:24:05.728 Port ID: 0 (0x0000) 00:24:05.728 Controller ID: 65535 (0xffff) 00:24:05.728 Admin Max SQ Size: 128 00:24:05.728 Transport Service Identifier: 4420 00:24:05.728 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:05.728 Transport Address: 10.0.0.2 00:24:05.728 Discovery Log Entry 1 00:24:05.728 ---------------------- 00:24:05.728 Transport Type: 3 (TCP) 00:24:05.728 Address Family: 1 (IPv4) 00:24:05.728 Subsystem Type: 2 (NVM Subsystem) 00:24:05.728 Entry Flags: 00:24:05.728 Duplicate Returned Information: 0 00:24:05.728 Explicit Persistent Connection Support for Discovery: 0 00:24:05.728 Transport Requirements: 00:24:05.728 Secure Channel: Not Required 00:24:05.728 Port ID: 0 (0x0000) 00:24:05.728 Controller ID: 65535 (0xffff) 00:24:05.728 Admin Max SQ Size: 128 00:24:05.728 Transport Service Identifier: 4420 00:24:05.728 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:05.728 Transport Address: 10.0.0.2 [2024-12-06 13:31:52.295822] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:05.728 [2024-12-06 
13:31:52.295834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980100) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.295842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.728 [2024-12-06 13:31:52.295848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980280) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.295853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.728 [2024-12-06 13:31:52.295858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980400) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.295862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.728 [2024-12-06 13:31:52.295867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.295872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.728 [2024-12-06 13:31:52.295885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.295889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.295893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.295901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.728 [2024-12-06 13:31:52.295917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.728 [2024-12-06 13:31:52.296138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.728 [2024-12-06 
13:31:52.296145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.728 [2024-12-06 13:31:52.296148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.296160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.296177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.728 [2024-12-06 13:31:52.296191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.728 [2024-12-06 13:31:52.296423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.728 [2024-12-06 13:31:52.296429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.728 [2024-12-06 13:31:52.296433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.296443] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:05.728 [2024-12-06 13:31:52.296449] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:05.728 [2024-12-06 13:31:52.296465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 
[2024-12-06 13:31:52.296473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.296479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.728 [2024-12-06 13:31:52.296491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.728 [2024-12-06 13:31:52.296739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.728 [2024-12-06 13:31:52.296745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.728 [2024-12-06 13:31:52.296749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.296763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.296771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.296777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.728 [2024-12-06 13:31:52.296788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.728 [2024-12-06 13:31:52.296992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.728 [2024-12-06 13:31:52.296999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.728 [2024-12-06 13:31:52.297002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.297006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 
00:24:05.728 [2024-12-06 13:31:52.297016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.297020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.297024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.297030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.728 [2024-12-06 13:31:52.297041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.728 [2024-12-06 13:31:52.297243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.728 [2024-12-06 13:31:52.297250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.728 [2024-12-06 13:31:52.297253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.297257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.297269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.297273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.297277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.297283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.728 [2024-12-06 13:31:52.297294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.728 [2024-12-06 13:31:52.301468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.728 [2024-12-06 13:31:52.301478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.728 
[2024-12-06 13:31:52.301481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.301485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.728 [2024-12-06 13:31:52.301495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.301499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.728 [2024-12-06 13:31:52.301503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x91e690) 00:24:05.728 [2024-12-06 13:31:52.301510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.729 [2024-12-06 13:31:52.301522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x980580, cid 3, qid 0 00:24:05.729 [2024-12-06 13:31:52.301704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.729 [2024-12-06 13:31:52.301710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.729 [2024-12-06 13:31:52.301713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.729 [2024-12-06 13:31:52.301717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x980580) on tqpair=0x91e690 00:24:05.729 [2024-12-06 13:31:52.301725] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:24:05.729 00:24:05.729 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:05.729 [2024-12-06 13:31:52.348326] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:24:05.729 [2024-12-06 13:31:52.348375] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244941 ] 00:24:05.993 [2024-12-06 13:31:52.404985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:05.993 [2024-12-06 13:31:52.405053] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:05.993 [2024-12-06 13:31:52.405058] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:05.993 [2024-12-06 13:31:52.405077] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:05.993 [2024-12-06 13:31:52.405088] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:05.993 [2024-12-06 13:31:52.405765] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:05.993 [2024-12-06 13:31:52.405805] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x961690 0 00:24:05.993 [2024-12-06 13:31:52.411469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:05.993 [2024-12-06 13:31:52.411483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:05.993 [2024-12-06 13:31:52.411492] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:05.993 [2024-12-06 13:31:52.411496] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:05.993 [2024-12-06 13:31:52.411533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.411539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.411543] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.411557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:05.993 [2024-12-06 13:31:52.411582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.419467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.419477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.419480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.419485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.419494] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:05.993 [2024-12-06 13:31:52.419502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:05.993 [2024-12-06 13:31:52.419508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:05.993 [2024-12-06 13:31:52.419522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.419526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.419530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.419538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.993 [2024-12-06 13:31:52.419554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.419754] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.419761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.419765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.419769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.419774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:05.993 [2024-12-06 13:31:52.419782] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:05.993 [2024-12-06 13:31:52.419789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.419793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.419796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.419803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.993 [2024-12-06 13:31:52.419814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.423462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.423470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.423473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.423483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:24:05.993 [2024-12-06 13:31:52.423496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:05.993 [2024-12-06 13:31:52.423503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.423518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.993 [2024-12-06 13:31:52.423530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.423724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.423730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.423734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.423743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:05.993 [2024-12-06 13:31:52.423753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.423767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.993 [2024-12-06 13:31:52.423777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.423948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.423954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.423958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.423962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.423966] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:05.993 [2024-12-06 13:31:52.423971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:05.993 [2024-12-06 13:31:52.423979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:05.993 [2024-12-06 13:31:52.424088] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:05.993 [2024-12-06 13:31:52.424093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:05.993 [2024-12-06 13:31:52.424101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.424114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.993 [2024-12-06 13:31:52.424126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.424300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.424306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.424310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.424321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:05.993 [2024-12-06 13:31:52.424331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.993 [2024-12-06 13:31:52.424345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.993 [2024-12-06 13:31:52.424356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.993 [2024-12-06 13:31:52.424577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.993 [2024-12-06 13:31:52.424584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.993 [2024-12-06 13:31:52.424587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.993 [2024-12-06 13:31:52.424596] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:05.993 [2024-12-06 13:31:52.424600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:05.993 [2024-12-06 13:31:52.424608] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:05.993 [2024-12-06 13:31:52.424625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:05.993 [2024-12-06 13:31:52.424635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.993 [2024-12-06 13:31:52.424639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.424646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.994 [2024-12-06 13:31:52.424657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.994 [2024-12-06 13:31:52.424869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.994 [2024-12-06 13:31:52.424875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.994 [2024-12-06 13:31:52.424879] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.424883] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=4096, cccid=0 00:24:05.994 [2024-12-06 13:31:52.424888] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3100) on tqpair(0x961690): expected_datao=0, payload_size=4096 00:24:05.994 [2024-12-06 13:31:52.424893] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.424912] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.424917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.994 [2024-12-06 13:31:52.425093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.994 [2024-12-06 13:31:52.425097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.994 [2024-12-06 13:31:52.425109] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:05.994 [2024-12-06 13:31:52.425116] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:05.994 [2024-12-06 13:31:52.425120] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:05.994 [2024-12-06 13:31:52.425127] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:05.994 [2024-12-06 13:31:52.425132] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:05.994 [2024-12-06 13:31:52.425137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425157] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.994 [2024-12-06 13:31:52.425180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3100, cid 0, qid 0 00:24:05.994 [2024-12-06 13:31:52.425406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.994 [2024-12-06 13:31:52.425412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.994 [2024-12-06 13:31:52.425415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.994 [2024-12-06 13:31:52.425426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.994 [2024-12-06 13:31:52.425446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:05.994 [2024-12-06 13:31:52.425470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.994 [2024-12-06 13:31:52.425489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.994 [2024-12-06 13:31:52.425506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.994 [2024-12-06 13:31:52.425549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x9c3100, cid 0, qid 0 00:24:05.994 [2024-12-06 13:31:52.425554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3280, cid 1, qid 0 00:24:05.994 [2024-12-06 13:31:52.425559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3400, cid 2, qid 0 00:24:05.994 [2024-12-06 13:31:52.425564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3580, cid 3, qid 0 00:24:05.994 [2024-12-06 13:31:52.425569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.994 [2024-12-06 13:31:52.425806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.994 [2024-12-06 13:31:52.425813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.994 [2024-12-06 13:31:52.425816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.994 [2024-12-06 13:31:52.425825] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:05.994 [2024-12-06 13:31:52.425830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.425852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.425856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 
13:31:52.425860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.425866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.994 [2024-12-06 13:31:52.425877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.994 [2024-12-06 13:31:52.426051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.994 [2024-12-06 13:31:52.426057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.994 [2024-12-06 13:31:52.426060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.994 [2024-12-06 13:31:52.426134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.426143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.426151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.426162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.994 [2024-12-06 13:31:52.426172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.994 [2024-12-06 13:31:52.426378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.994 [2024-12-06 13:31:52.426384] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.994 [2024-12-06 13:31:52.426388] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426393] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=4096, cccid=4 00:24:05.994 [2024-12-06 13:31:52.426398] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3700) on tqpair(0x961690): expected_datao=0, payload_size=4096 00:24:05.994 [2024-12-06 13:31:52.426403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426424] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.994 [2024-12-06 13:31:52.426606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.994 [2024-12-06 13:31:52.426610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.994 [2024-12-06 13:31:52.426625] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:05.994 [2024-12-06 13:31:52.426637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.426647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:05.994 [2024-12-06 13:31:52.426653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.994 [2024-12-06 13:31:52.426657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x961690) 00:24:05.994 [2024-12-06 13:31:52.426664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.994 [2024-12-06 13:31:52.426675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.994 [2024-12-06 13:31:52.426876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.994 [2024-12-06 13:31:52.426882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.995 [2024-12-06 13:31:52.426886] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.426889] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=4096, cccid=4 00:24:05.995 [2024-12-06 13:31:52.426894] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3700) on tqpair(0x961690): expected_datao=0, payload_size=4096 00:24:05.995 [2024-12-06 13:31:52.426898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.426905] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.426908] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.427068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.427071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.995 [2024-12-06 13:31:52.427089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:05.995 [2024-12-06 
13:31:52.427099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.427116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.427127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.995 [2024-12-06 13:31:52.427312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.995 [2024-12-06 13:31:52.427318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.995 [2024-12-06 13:31:52.427322] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427325] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=4096, cccid=4 00:24:05.995 [2024-12-06 13:31:52.427330] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3700) on tqpair(0x961690): expected_datao=0, payload_size=4096 00:24:05.995 [2024-12-06 13:31:52.427334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427349] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427353] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.427528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.427531] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.995 [2024-12-06 13:31:52.427543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427560] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427586] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:05.995 [2024-12-06 13:31:52.427591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:05.995 [2024-12-06 13:31:52.427596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:05.995 [2024-12-06 13:31:52.427613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427617] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.427624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.427631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.427645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:05.995 [2024-12-06 13:31:52.427659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.995 [2024-12-06 13:31:52.427664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3880, cid 5, qid 0 00:24:05.995 [2024-12-06 13:31:52.427882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.427888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.427894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.995 [2024-12-06 13:31:52.427905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.427911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.427914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3880) on tqpair=0x961690 00:24:05.995 [2024-12-06 
13:31:52.427927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.427931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.427937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.427948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3880, cid 5, qid 0 00:24:05.995 [2024-12-06 13:31:52.428121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.428127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.428130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3880) on tqpair=0x961690 00:24:05.995 [2024-12-06 13:31:52.428143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.428154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.428164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3880, cid 5, qid 0 00:24:05.995 [2024-12-06 13:31:52.428360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.428367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.428370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x9c3880) on tqpair=0x961690 00:24:05.995 [2024-12-06 13:31:52.428383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.428393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.428403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3880, cid 5, qid 0 00:24:05.995 [2024-12-06 13:31:52.428592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.995 [2024-12-06 13:31:52.428599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.995 [2024-12-06 13:31:52.428602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3880) on tqpair=0x961690 00:24:05.995 [2024-12-06 13:31:52.428623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.428634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.428641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.428651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 
[2024-12-06 13:31:52.428660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.428670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.428678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.428682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x961690) 00:24:05.995 [2024-12-06 13:31:52.428688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.995 [2024-12-06 13:31:52.428700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3880, cid 5, qid 0 00:24:05.995 [2024-12-06 13:31:52.428705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3700, cid 4, qid 0 00:24:05.995 [2024-12-06 13:31:52.428710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3a00, cid 6, qid 0 00:24:05.995 [2024-12-06 13:31:52.428714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3b80, cid 7, qid 0 00:24:05.995 [2024-12-06 13:31:52.429025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.995 [2024-12-06 13:31:52.429031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.995 [2024-12-06 13:31:52.429034] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.995 [2024-12-06 13:31:52.429038] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=8192, cccid=5 00:24:05.996 [2024-12-06 13:31:52.429043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x9c3880) on tqpair(0x961690): expected_datao=0, payload_size=8192 00:24:05.996 [2024-12-06 13:31:52.429047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429123] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.996 [2024-12-06 13:31:52.429139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.996 [2024-12-06 13:31:52.429142] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429146] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=512, cccid=4 00:24:05.996 [2024-12-06 13:31:52.429150] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3700) on tqpair(0x961690): expected_datao=0, payload_size=512 00:24:05.996 [2024-12-06 13:31:52.429155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429161] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429165] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.996 [2024-12-06 13:31:52.429176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.996 [2024-12-06 13:31:52.429179] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=512, cccid=6 00:24:05.996 [2024-12-06 13:31:52.429187] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3a00) on tqpair(0x961690): expected_datao=0, 
payload_size=512 00:24:05.996 [2024-12-06 13:31:52.429192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:05.996 [2024-12-06 13:31:52.429215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:05.996 [2024-12-06 13:31:52.429219] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x961690): datao=0, datal=4096, cccid=7 00:24:05.996 [2024-12-06 13:31:52.429226] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9c3b80) on tqpair(0x961690): expected_datao=0, payload_size=4096 00:24:05.996 [2024-12-06 13:31:52.429231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429242] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429246] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.996 [2024-12-06 13:31:52.429262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.996 [2024-12-06 13:31:52.429265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3880) on tqpair=0x961690 00:24:05.996 [2024-12-06 13:31:52.429282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.996 [2024-12-06 13:31:52.429288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.996 [2024-12-06 
13:31:52.429291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3700) on tqpair=0x961690 00:24:05.996 [2024-12-06 13:31:52.429305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.996 [2024-12-06 13:31:52.429311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.996 [2024-12-06 13:31:52.429315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3a00) on tqpair=0x961690 00:24:05.996 [2024-12-06 13:31:52.429325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.996 [2024-12-06 13:31:52.429331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.996 [2024-12-06 13:31:52.429335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.996 [2024-12-06 13:31:52.429339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3b80) on tqpair=0x961690 00:24:05.996 ===================================================== 00:24:05.996 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.996 ===================================================== 00:24:05.996 Controller Capabilities/Features 00:24:05.996 ================================ 00:24:05.996 Vendor ID: 8086 00:24:05.996 Subsystem Vendor ID: 8086 00:24:05.996 Serial Number: SPDK00000000000001 00:24:05.996 Model Number: SPDK bdev Controller 00:24:05.996 Firmware Version: 25.01 00:24:05.996 Recommended Arb Burst: 6 00:24:05.996 IEEE OUI Identifier: e4 d2 5c 00:24:05.996 Multi-path I/O 00:24:05.996 May have multiple subsystem ports: Yes 00:24:05.996 May have multiple controllers: Yes 00:24:05.996 Associated with SR-IOV VF: No 00:24:05.996 Max Data Transfer Size: 131072 00:24:05.996 Max Number of Namespaces: 32 00:24:05.996 
Max Number of I/O Queues: 127 00:24:05.996 NVMe Specification Version (VS): 1.3 00:24:05.996 NVMe Specification Version (Identify): 1.3 00:24:05.996 Maximum Queue Entries: 128 00:24:05.996 Contiguous Queues Required: Yes 00:24:05.996 Arbitration Mechanisms Supported 00:24:05.996 Weighted Round Robin: Not Supported 00:24:05.996 Vendor Specific: Not Supported 00:24:05.996 Reset Timeout: 15000 ms 00:24:05.996 Doorbell Stride: 4 bytes 00:24:05.996 NVM Subsystem Reset: Not Supported 00:24:05.996 Command Sets Supported 00:24:05.996 NVM Command Set: Supported 00:24:05.996 Boot Partition: Not Supported 00:24:05.996 Memory Page Size Minimum: 4096 bytes 00:24:05.996 Memory Page Size Maximum: 4096 bytes 00:24:05.996 Persistent Memory Region: Not Supported 00:24:05.996 Optional Asynchronous Events Supported 00:24:05.996 Namespace Attribute Notices: Supported 00:24:05.996 Firmware Activation Notices: Not Supported 00:24:05.996 ANA Change Notices: Not Supported 00:24:05.996 PLE Aggregate Log Change Notices: Not Supported 00:24:05.996 LBA Status Info Alert Notices: Not Supported 00:24:05.996 EGE Aggregate Log Change Notices: Not Supported 00:24:05.996 Normal NVM Subsystem Shutdown event: Not Supported 00:24:05.996 Zone Descriptor Change Notices: Not Supported 00:24:05.996 Discovery Log Change Notices: Not Supported 00:24:05.996 Controller Attributes 00:24:05.996 128-bit Host Identifier: Supported 00:24:05.996 Non-Operational Permissive Mode: Not Supported 00:24:05.996 NVM Sets: Not Supported 00:24:05.996 Read Recovery Levels: Not Supported 00:24:05.996 Endurance Groups: Not Supported 00:24:05.996 Predictable Latency Mode: Not Supported 00:24:05.996 Traffic Based Keep ALive: Not Supported 00:24:05.996 Namespace Granularity: Not Supported 00:24:05.996 SQ Associations: Not Supported 00:24:05.996 UUID List: Not Supported 00:24:05.996 Multi-Domain Subsystem: Not Supported 00:24:05.996 Fixed Capacity Management: Not Supported 00:24:05.996 Variable Capacity Management: Not Supported 
00:24:05.996 Delete Endurance Group: Not Supported 00:24:05.996 Delete NVM Set: Not Supported 00:24:05.996 Extended LBA Formats Supported: Not Supported 00:24:05.996 Flexible Data Placement Supported: Not Supported 00:24:05.996 00:24:05.996 Controller Memory Buffer Support 00:24:05.996 ================================ 00:24:05.996 Supported: No 00:24:05.996 00:24:05.996 Persistent Memory Region Support 00:24:05.996 ================================ 00:24:05.996 Supported: No 00:24:05.996 00:24:05.996 Admin Command Set Attributes 00:24:05.996 ============================ 00:24:05.996 Security Send/Receive: Not Supported 00:24:05.996 Format NVM: Not Supported 00:24:05.996 Firmware Activate/Download: Not Supported 00:24:05.996 Namespace Management: Not Supported 00:24:05.996 Device Self-Test: Not Supported 00:24:05.996 Directives: Not Supported 00:24:05.996 NVMe-MI: Not Supported 00:24:05.996 Virtualization Management: Not Supported 00:24:05.996 Doorbell Buffer Config: Not Supported 00:24:05.996 Get LBA Status Capability: Not Supported 00:24:05.996 Command & Feature Lockdown Capability: Not Supported 00:24:05.996 Abort Command Limit: 4 00:24:05.996 Async Event Request Limit: 4 00:24:05.996 Number of Firmware Slots: N/A 00:24:05.996 Firmware Slot 1 Read-Only: N/A 00:24:05.996 Firmware Activation Without Reset: N/A 00:24:05.996 Multiple Update Detection Support: N/A 00:24:05.996 Firmware Update Granularity: No Information Provided 00:24:05.996 Per-Namespace SMART Log: No 00:24:05.996 Asymmetric Namespace Access Log Page: Not Supported 00:24:05.996 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:05.996 Command Effects Log Page: Supported 00:24:05.996 Get Log Page Extended Data: Supported 00:24:05.996 Telemetry Log Pages: Not Supported 00:24:05.996 Persistent Event Log Pages: Not Supported 00:24:05.996 Supported Log Pages Log Page: May Support 00:24:05.996 Commands Supported & Effects Log Page: Not Supported 00:24:05.996 Feature Identifiers & Effects Log Page: May Support 
00:24:05.996 NVMe-MI Commands & Effects Log Page: May Support 00:24:05.996 Data Area 4 for Telemetry Log: Not Supported 00:24:05.996 Error Log Page Entries Supported: 128 00:24:05.996 Keep Alive: Supported 00:24:05.996 Keep Alive Granularity: 10000 ms 00:24:05.996 00:24:05.996 NVM Command Set Attributes 00:24:05.996 ========================== 00:24:05.996 Submission Queue Entry Size 00:24:05.996 Max: 64 00:24:05.996 Min: 64 00:24:05.996 Completion Queue Entry Size 00:24:05.996 Max: 16 00:24:05.996 Min: 16 00:24:05.997 Number of Namespaces: 32 00:24:05.997 Compare Command: Supported 00:24:05.997 Write Uncorrectable Command: Not Supported 00:24:05.997 Dataset Management Command: Supported 00:24:05.997 Write Zeroes Command: Supported 00:24:05.997 Set Features Save Field: Not Supported 00:24:05.997 Reservations: Supported 00:24:05.997 Timestamp: Not Supported 00:24:05.997 Copy: Supported 00:24:05.997 Volatile Write Cache: Present 00:24:05.997 Atomic Write Unit (Normal): 1 00:24:05.997 Atomic Write Unit (PFail): 1 00:24:05.997 Atomic Compare & Write Unit: 1 00:24:05.997 Fused Compare & Write: Supported 00:24:05.997 Scatter-Gather List 00:24:05.997 SGL Command Set: Supported 00:24:05.997 SGL Keyed: Supported 00:24:05.997 SGL Bit Bucket Descriptor: Not Supported 00:24:05.997 SGL Metadata Pointer: Not Supported 00:24:05.997 Oversized SGL: Not Supported 00:24:05.997 SGL Metadata Address: Not Supported 00:24:05.997 SGL Offset: Supported 00:24:05.997 Transport SGL Data Block: Not Supported 00:24:05.997 Replay Protected Memory Block: Not Supported 00:24:05.997 00:24:05.997 Firmware Slot Information 00:24:05.997 ========================= 00:24:05.997 Active slot: 1 00:24:05.997 Slot 1 Firmware Revision: 25.01 00:24:05.997 00:24:05.997 00:24:05.997 Commands Supported and Effects 00:24:05.997 ============================== 00:24:05.997 Admin Commands 00:24:05.997 -------------- 00:24:05.997 Get Log Page (02h): Supported 00:24:05.997 Identify (06h): Supported 00:24:05.997 Abort 
(08h): Supported 00:24:05.997 Set Features (09h): Supported 00:24:05.997 Get Features (0Ah): Supported 00:24:05.997 Asynchronous Event Request (0Ch): Supported 00:24:05.997 Keep Alive (18h): Supported 00:24:05.997 I/O Commands 00:24:05.997 ------------ 00:24:05.997 Flush (00h): Supported LBA-Change 00:24:05.997 Write (01h): Supported LBA-Change 00:24:05.997 Read (02h): Supported 00:24:05.997 Compare (05h): Supported 00:24:05.997 Write Zeroes (08h): Supported LBA-Change 00:24:05.997 Dataset Management (09h): Supported LBA-Change 00:24:05.997 Copy (19h): Supported LBA-Change 00:24:05.997 00:24:05.997 Error Log 00:24:05.997 ========= 00:24:05.997 00:24:05.997 Arbitration 00:24:05.997 =========== 00:24:05.997 Arbitration Burst: 1 00:24:05.997 00:24:05.997 Power Management 00:24:05.997 ================ 00:24:05.997 Number of Power States: 1 00:24:05.997 Current Power State: Power State #0 00:24:05.997 Power State #0: 00:24:05.997 Max Power: 0.00 W 00:24:05.997 Non-Operational State: Operational 00:24:05.997 Entry Latency: Not Reported 00:24:05.997 Exit Latency: Not Reported 00:24:05.997 Relative Read Throughput: 0 00:24:05.997 Relative Read Latency: 0 00:24:05.997 Relative Write Throughput: 0 00:24:05.997 Relative Write Latency: 0 00:24:05.997 Idle Power: Not Reported 00:24:05.997 Active Power: Not Reported 00:24:05.997 Non-Operational Permissive Mode: Not Supported 00:24:05.997 00:24:05.997 Health Information 00:24:05.997 ================== 00:24:05.997 Critical Warnings: 00:24:05.997 Available Spare Space: OK 00:24:05.997 Temperature: OK 00:24:05.997 Device Reliability: OK 00:24:05.997 Read Only: No 00:24:05.997 Volatile Memory Backup: OK 00:24:05.997 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:05.997 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:05.997 Available Spare: 0% 00:24:05.997 Available Spare Threshold: 0% 00:24:05.997 Life Percentage Used:[2024-12-06 13:31:52.429438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.997 
[2024-12-06 13:31:52.429443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x961690) 00:24:05.997 [2024-12-06 13:31:52.429450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.997 [2024-12-06 13:31:52.429468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3b80, cid 7, qid 0 00:24:05.997 [2024-12-06 13:31:52.429670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.997 [2024-12-06 13:31:52.429677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.997 [2024-12-06 13:31:52.429680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.429684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3b80) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.429722] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:05.997 [2024-12-06 13:31:52.429731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3100) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.429738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.997 [2024-12-06 13:31:52.429743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3280) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.429748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.997 [2024-12-06 13:31:52.429753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3400) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.429759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.997 
[2024-12-06 13:31:52.429764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3580) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.429769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.997 [2024-12-06 13:31:52.429777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.429781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.429784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961690) 00:24:05.997 [2024-12-06 13:31:52.429791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.997 [2024-12-06 13:31:52.429803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3580, cid 3, qid 0 00:24:05.997 [2024-12-06 13:31:52.430045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.997 [2024-12-06 13:31:52.430051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.997 [2024-12-06 13:31:52.430055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.430058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3580) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.430066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.430069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.430073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961690) 00:24:05.997 [2024-12-06 13:31:52.430079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.997 [2024-12-06 13:31:52.430094] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3580, cid 3, qid 0 00:24:05.997 [2024-12-06 13:31:52.430285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.997 [2024-12-06 13:31:52.430292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.997 [2024-12-06 13:31:52.430295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.430299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3580) on tqpair=0x961690 00:24:05.997 [2024-12-06 13:31:52.430304] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:05.997 [2024-12-06 13:31:52.430308] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:05.997 [2024-12-06 13:31:52.430318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.430322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.430326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x961690) 00:24:05.997 [2024-12-06 13:31:52.430332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.997 [2024-12-06 13:31:52.430343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9c3580, cid 3, qid 0 00:24:05.997 [2024-12-06 13:31:52.434463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:05.997 [2024-12-06 13:31:52.434472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:05.997 [2024-12-06 13:31:52.434475] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:05.997 [2024-12-06 13:31:52.434479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9c3580) on tqpair=0x961690 00:24:05.998 [2024-12-06 13:31:52.434488] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:05.998 0% 00:24:05.998 Data Units Read: 0 00:24:05.998 Data Units Written: 0 00:24:05.998 Host Read Commands: 0 00:24:05.998 Host Write Commands: 0 00:24:05.998 Controller Busy Time: 0 minutes 00:24:05.998 Power Cycles: 0 00:24:05.998 Power On Hours: 0 hours 00:24:05.998 Unsafe Shutdowns: 0 00:24:05.998 Unrecoverable Media Errors: 0 00:24:05.998 Lifetime Error Log Entries: 0 00:24:05.998 Warning Temperature Time: 0 minutes 00:24:05.998 Critical Temperature Time: 0 minutes 00:24:05.998 00:24:05.998 Number of Queues 00:24:05.998 ================ 00:24:05.998 Number of I/O Submission Queues: 127 00:24:05.998 Number of I/O Completion Queues: 127 00:24:05.998 00:24:05.998 Active Namespaces 00:24:05.998 ================= 00:24:05.998 Namespace ID:1 00:24:05.998 Error Recovery Timeout: Unlimited 00:24:05.998 Command Set Identifier: NVM (00h) 00:24:05.998 Deallocate: Supported 00:24:05.998 Deallocated/Unwritten Error: Not Supported 00:24:05.998 Deallocated Read Value: Unknown 00:24:05.998 Deallocate in Write Zeroes: Not Supported 00:24:05.998 Deallocated Guard Field: 0xFFFF 00:24:05.998 Flush: Supported 00:24:05.998 Reservation: Supported 00:24:05.998 Namespace Sharing Capabilities: Multiple Controllers 00:24:05.998 Size (in LBAs): 131072 (0GiB) 00:24:05.998 Capacity (in LBAs): 131072 (0GiB) 00:24:05.998 Utilization (in LBAs): 131072 (0GiB) 00:24:05.998 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:05.998 EUI64: ABCDEF0123456789 00:24:05.998 UUID: 95493963-69a1-432f-a7ec-847506a04fad 00:24:05.998 Thin Provisioning: Not Supported 00:24:05.998 Per-NS Atomic Units: Yes 00:24:05.998 Atomic Boundary Size (Normal): 0 00:24:05.998 Atomic Boundary Size (PFail): 0 00:24:05.998 Atomic Boundary Offset: 0 00:24:05.998 Maximum Single Source Range Length: 65535 00:24:05.998 Maximum Copy Length: 65535 00:24:05.998 Maximum Source Range Count: 1 00:24:05.998 
NGUID/EUI64 Never Reused: No 00:24:05.998 Namespace Write Protected: No 00:24:05.998 Number of LBA Formats: 1 00:24:05.998 Current LBA Format: LBA Format #00 00:24:05.998 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:05.998 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.998 rmmod nvme_tcp 00:24:05.998 rmmod nvme_fabrics 00:24:05.998 rmmod nvme_keyring 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@517 -- # '[' -n 2244592 ']' 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2244592 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2244592 ']' 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2244592 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2244592 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2244592' 00:24:05.998 killing process with pid 2244592 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2244592 00:24:05.998 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2244592 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # 
iptables-restore 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.258 13:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.802 00:24:08.802 real 0m11.572s 00:24:08.802 user 0m8.268s 00:24:08.802 sys 0m6.203s 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.802 ************************************ 00:24:08.802 END TEST nvmf_identify 00:24:08.802 ************************************ 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.802 ************************************ 00:24:08.802 START TEST nvmf_perf 00:24:08.802 ************************************ 00:24:08.802 13:31:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:08.802 * Looking for test storage... 
00:24:08.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:08.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.802 --rc genhtml_branch_coverage=1 00:24:08.802 --rc genhtml_function_coverage=1 00:24:08.802 --rc genhtml_legend=1 00:24:08.802 --rc geninfo_all_blocks=1 00:24:08.802 --rc geninfo_unexecuted_blocks=1 00:24:08.802 00:24:08.802 ' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:08.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:08.802 --rc genhtml_branch_coverage=1 00:24:08.802 --rc genhtml_function_coverage=1 00:24:08.802 --rc genhtml_legend=1 00:24:08.802 --rc geninfo_all_blocks=1 00:24:08.802 --rc geninfo_unexecuted_blocks=1 00:24:08.802 00:24:08.802 ' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:08.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.802 --rc genhtml_branch_coverage=1 00:24:08.802 --rc genhtml_function_coverage=1 00:24:08.802 --rc genhtml_legend=1 00:24:08.802 --rc geninfo_all_blocks=1 00:24:08.802 --rc geninfo_unexecuted_blocks=1 00:24:08.802 00:24:08.802 ' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:08.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.802 --rc genhtml_branch_coverage=1 00:24:08.802 --rc genhtml_function_coverage=1 00:24:08.802 --rc genhtml_legend=1 00:24:08.802 --rc geninfo_all_blocks=1 00:24:08.802 --rc geninfo_unexecuted_blocks=1 00:24:08.802 00:24:08.802 ' 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:08.802 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:08.803 13:31:55 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.803 13:31:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.964 13:32:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.964 
13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.964 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.964 13:32:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.964 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:24:16.965 00:24:16.965 --- 10.0.0.2 ping statistics --- 00:24:16.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.965 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:24:16.965 00:24:16.965 --- 10.0.0.1 ping statistics --- 00:24:16.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.965 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2249100 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2249100 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2249100 ']' 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.965 13:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:16.965 [2024-12-06 13:32:02.879559] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:24:16.965 [2024-12-06 13:32:02.879637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.965 [2024-12-06 13:32:02.982825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.965 [2024-12-06 13:32:03.036841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.965 [2024-12-06 13:32:03.036893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.965 [2024-12-06 13:32:03.036902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.965 [2024-12-06 13:32:03.036909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.965 [2024-12-06 13:32:03.036916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.965 [2024-12-06 13:32:03.039299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.965 [2024-12-06 13:32:03.039472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.965 [2024-12-06 13:32:03.039615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.965 [2024-12-06 13:32:03.039615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:17.227 13:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:17.799 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:17.799 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:18.060 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:18.060 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:18.060 13:32:04 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:18.060 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:18.060 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:18.060 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:18.060 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:18.321 [2024-12-06 13:32:04.860134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.321 13:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.603 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:18.603 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.867 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:18.867 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:18.867 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.127 [2024-12-06 13:32:05.643678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.127 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:19.387 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:19.388 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:19.388 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:19.388 13:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:20.773 Initializing NVMe Controllers 00:24:20.773 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:20.773 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:20.773 Initialization complete. Launching workers. 00:24:20.773 ======================================================== 00:24:20.773 Latency(us) 00:24:20.773 Device Information : IOPS MiB/s Average min max 00:24:20.773 PCIE (0000:65:00.0) NSID 1 from core 0: 78729.22 307.54 405.95 13.22 4985.98 00:24:20.773 ======================================================== 00:24:20.773 Total : 78729.22 307.54 405.95 13.22 4985.98 00:24:20.773 00:24:20.773 13:32:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:22.159 Initializing NVMe Controllers 00:24:22.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:22.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:22.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:22.160 Initialization complete. Launching workers. 
00:24:22.160 ======================================================== 00:24:22.160 Latency(us) 00:24:22.160 Device Information : IOPS MiB/s Average min max 00:24:22.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.37 10820.82 228.20 45642.34 00:24:22.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15221.23 7001.56 47889.39 00:24:22.160 ======================================================== 00:24:22.160 Total : 162.00 0.63 12613.58 228.20 47889.39 00:24:22.160 00:24:22.160 13:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.546 Initializing NVMe Controllers 00:24:23.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:23.546 Initialization complete. Launching workers. 
00:24:23.546 ======================================================== 00:24:23.546 Latency(us) 00:24:23.546 Device Information : IOPS MiB/s Average min max 00:24:23.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11832.12 46.22 2708.49 335.44 6414.52 00:24:23.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3831.72 14.97 8394.60 6709.45 16092.64 00:24:23.546 ======================================================== 00:24:23.546 Total : 15663.84 61.19 4099.44 335.44 16092.64 00:24:23.546 00:24:23.546 13:32:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:23.546 13:32:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:23.546 13:32:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.091 Initializing NVMe Controllers 00:24:26.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.091 Controller IO queue size 128, less than required. 00:24:26.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.091 Controller IO queue size 128, less than required. 00:24:26.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.091 Initialization complete. Launching workers. 
00:24:26.091 ======================================================== 00:24:26.091 Latency(us) 00:24:26.091 Device Information : IOPS MiB/s Average min max 00:24:26.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1832.70 458.17 71162.79 41806.69 121421.16 00:24:26.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.26 146.32 222842.96 63411.29 331828.61 00:24:26.091 ======================================================== 00:24:26.091 Total : 2417.96 604.49 107876.78 41806.69 331828.61 00:24:26.091 00:24:26.091 13:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:26.091 No valid NVMe controllers or AIO or URING devices found 00:24:26.091 Initializing NVMe Controllers 00:24:26.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.091 Controller IO queue size 128, less than required. 00:24:26.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.091 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:26.091 Controller IO queue size 128, less than required. 00:24:26.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.091 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:26.091 WARNING: Some requested NVMe devices were skipped 00:24:26.091 13:32:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:28.637 Initializing NVMe Controllers 00:24:28.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.637 Controller IO queue size 128, less than required. 00:24:28.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.637 Controller IO queue size 128, less than required. 00:24:28.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.637 Initialization complete. Launching workers. 
00:24:28.637 00:24:28.637 ==================== 00:24:28.637 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:28.637 TCP transport: 00:24:28.637 polls: 35662 00:24:28.637 idle_polls: 25184 00:24:28.637 sock_completions: 10478 00:24:28.637 nvme_completions: 9197 00:24:28.637 submitted_requests: 13870 00:24:28.637 queued_requests: 1 00:24:28.637 00:24:28.637 ==================== 00:24:28.637 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:28.637 TCP transport: 00:24:28.637 polls: 34001 00:24:28.637 idle_polls: 21425 00:24:28.637 sock_completions: 12576 00:24:28.637 nvme_completions: 6941 00:24:28.637 submitted_requests: 10542 00:24:28.637 queued_requests: 1 00:24:28.637 ======================================================== 00:24:28.637 Latency(us) 00:24:28.637 Device Information : IOPS MiB/s Average min max 00:24:28.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2298.98 574.74 56372.27 29567.00 106035.35 00:24:28.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1734.98 433.75 74874.08 28960.18 133187.37 00:24:28.637 ======================================================== 00:24:28.637 Total : 4033.96 1008.49 64329.79 28960.18 133187.37 00:24:28.637 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.637 13:32:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:28.637 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.638 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:28.638 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.638 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.900 rmmod nvme_tcp 00:24:28.900 rmmod nvme_fabrics 00:24:28.900 rmmod nvme_keyring 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2249100 ']' 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2249100 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2249100 ']' 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2249100 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2249100 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2249100' 00:24:28.900 killing process with pid 2249100 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 2249100 00:24:28.900 13:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2249100 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.814 13:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.364 00:24:33.364 real 0m24.507s 00:24:33.364 user 0m58.959s 00:24:33.364 sys 0m8.729s 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:33.364 ************************************ 00:24:33.364 END TEST nvmf_perf 00:24:33.364 ************************************ 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.364 ************************************ 00:24:33.364 START TEST nvmf_fio_host 00:24:33.364 ************************************ 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:33.364 * Looking for test storage... 00:24:33.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.364 13:32:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.364 13:32:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.364 --rc genhtml_branch_coverage=1 00:24:33.364 --rc genhtml_function_coverage=1 00:24:33.364 --rc genhtml_legend=1 00:24:33.364 --rc geninfo_all_blocks=1 00:24:33.364 --rc geninfo_unexecuted_blocks=1 00:24:33.364 00:24:33.364 ' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.364 --rc genhtml_branch_coverage=1 00:24:33.364 --rc genhtml_function_coverage=1 00:24:33.364 --rc genhtml_legend=1 00:24:33.364 --rc geninfo_all_blocks=1 00:24:33.364 --rc geninfo_unexecuted_blocks=1 00:24:33.364 00:24:33.364 ' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.364 --rc genhtml_branch_coverage=1 00:24:33.364 --rc genhtml_function_coverage=1 00:24:33.364 --rc genhtml_legend=1 00:24:33.364 --rc geninfo_all_blocks=1 00:24:33.364 --rc geninfo_unexecuted_blocks=1 00:24:33.364 00:24:33.364 ' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.364 --rc genhtml_branch_coverage=1 00:24:33.364 --rc genhtml_function_coverage=1 00:24:33.364 --rc genhtml_legend=1 00:24:33.364 --rc geninfo_all_blocks=1 00:24:33.364 --rc geninfo_unexecuted_blocks=1 00:24:33.364 00:24:33.364 ' 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.364 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:33.365 13:32:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:33.365 13:32:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.654 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:24:41.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:41.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.655 13:32:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:41.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:41.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.655 13:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.655 13:32:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:24:41.655 00:24:41.655 --- 10.0.0.2 ping statistics --- 00:24:41.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.655 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:24:41.655 00:24:41.655 --- 10.0.0.1 ping statistics --- 00:24:41.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.655 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2256029 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.655 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2256029 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2256029 ']' 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.656 13:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.656 [2024-12-06 13:32:27.332969] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:24:41.656 [2024-12-06 13:32:27.333033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.656 [2024-12-06 13:32:27.431875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:41.656 [2024-12-06 13:32:27.485057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.656 [2024-12-06 13:32:27.485113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:41.656 [2024-12-06 13:32:27.485122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.656 [2024-12-06 13:32:27.485129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.656 [2024-12-06 13:32:27.485135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:41.656 [2024-12-06 13:32:27.487252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.656 [2024-12-06 13:32:27.487416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.656 [2024-12-06 13:32:27.487580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:41.656 [2024-12-06 13:32:27.487725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.656 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.656 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:41.656 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:41.916 [2024-12-06 13:32:28.314114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.916 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:41.916 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.916 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.916 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:42.177 Malloc1 00:24:42.178 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:42.178 13:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:42.439 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.700 [2024-12-06 13:32:29.193151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.700 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:42.961 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:42.961 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.961 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.961 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:42.961 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:42.962 13:32:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:42.962 13:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:43.224 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:43.224 fio-3.35 00:24:43.224 Starting 1 thread 00:24:45.768 00:24:45.768 test: (groupid=0, jobs=1): err= 0: pid=2256874: Fri Dec 6 13:32:32 2024 00:24:45.768 read: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2005msec) 00:24:45.768 slat (usec): min=2, max=277, avg= 2.17, stdev= 2.35 00:24:45.768 clat (usec): min=3338, max=9234, avg=5080.23, stdev=372.97 00:24:45.768 lat (usec): min=3340, max=9236, avg=5082.40, stdev=373.18 00:24:45.768 clat percentiles (usec): 00:24:45.768 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:45.768 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:45.768 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:24:45.768 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 8160], 99.95th=[ 8455], 00:24:45.768 | 99.99th=[ 9241] 00:24:45.768 bw ( KiB/s): min=54208, max=55928, per=100.00%, avg=55458.00, stdev=834.83, samples=4 00:24:45.768 iops : min=13552, max=13982, avg=13864.50, stdev=208.71, samples=4 00:24:45.768 write: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2005msec); 0 zone resets 00:24:45.768 slat (usec): min=2, max=272, avg= 2.23, stdev= 1.80 00:24:45.768 clat (usec): min=2600, max=8065, avg=4097.65, stdev=321.63 00:24:45.768 lat (usec): min=2602, max=8068, avg=4099.88, stdev=321.90 00:24:45.768 clat percentiles (usec): 00:24:45.768 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:24:45.768 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:24:45.768 | 70.00th=[ 
4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:45.768 | 99.00th=[ 4817], 99.50th=[ 5735], 99.90th=[ 6980], 99.95th=[ 7242], 00:24:45.768 | 99.99th=[ 7963] 00:24:45.768 bw ( KiB/s): min=54592, max=55896, per=100.00%, avg=55514.00, stdev=619.14, samples=4 00:24:45.768 iops : min=13648, max=13974, avg=13878.50, stdev=154.78, samples=4 00:24:45.768 lat (msec) : 4=18.18%, 10=81.82% 00:24:45.768 cpu : usr=72.90%, sys=25.90%, ctx=19, majf=0, minf=16 00:24:45.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:45.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.768 issued rwts: total=27797,27816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.768 00:24:45.768 Run status group 0 (all jobs): 00:24:45.768 READ: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:45.768 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:45.768 13:32:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:45.768 13:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:46.029 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:46.029 fio-3.35 00:24:46.029 Starting 1 thread 00:24:48.573 00:24:48.573 test: (groupid=0, jobs=1): err= 0: pid=2257465: Fri Dec 6 13:32:34 2024 00:24:48.573 read: IOPS=9678, BW=151MiB/s (159MB/s)(303MiB/2003msec) 00:24:48.573 slat (usec): min=3, max=114, avg= 3.63, stdev= 1.60 00:24:48.573 clat (usec): min=2204, max=14206, avg=8066.89, stdev=1881.60 00:24:48.573 lat (usec): min=2208, max=14209, avg=8070.53, stdev=1881.73 00:24:48.573 clat percentiles (usec): 00:24:48.573 | 1.00th=[ 4080], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6325], 00:24:48.573 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8586], 00:24:48.573 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11076], 00:24:48.573 | 99.00th=[12256], 99.50th=[12780], 99.90th=[13698], 99.95th=[13960], 00:24:48.573 | 99.99th=[14222] 00:24:48.573 bw ( KiB/s): min=71008, max=86336, per=49.21%, avg=76200.00, stdev=6882.87, samples=4 00:24:48.573 iops : min= 4438, max= 5396, avg=4762.50, stdev=430.18, samples=4 00:24:48.573 write: IOPS=5761, BW=90.0MiB/s (94.4MB/s)(156MiB/1734msec); 0 zone resets 00:24:48.573 slat (usec): min=39, max=405, avg=40.98, stdev= 7.84 00:24:48.573 clat (usec): min=2261, max=15490, avg=9043.08, stdev=1315.53 00:24:48.573 lat (usec): min=2301, max=15566, avg=9084.06, stdev=1317.58 00:24:48.573 clat percentiles (usec): 00:24:48.573 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7963], 00:24:48.573 | 
30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:48.573 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11076], 00:24:48.573 | 99.00th=[12518], 99.50th=[13173], 99.90th=[15139], 99.95th=[15270], 00:24:48.573 | 99.99th=[15533] 00:24:48.573 bw ( KiB/s): min=74144, max=89472, per=86.15%, avg=79416.00, stdev=6843.28, samples=4 00:24:48.573 iops : min= 4634, max= 5592, avg=4963.50, stdev=427.71, samples=4 00:24:48.573 lat (msec) : 4=0.65%, 10=80.46%, 20=18.89% 00:24:48.573 cpu : usr=86.16%, sys=12.64%, ctx=13, majf=0, minf=28 00:24:48.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:48.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.573 issued rwts: total=19386,9990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.573 00:24:48.573 Run status group 0 (all jobs): 00:24:48.573 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=303MiB (318MB), run=2003-2003msec 00:24:48.573 WRITE: bw=90.0MiB/s (94.4MB/s), 90.0MiB/s-90.0MiB/s (94.4MB/s-94.4MB/s), io=156MiB (164MB), run=1734-1734msec 00:24:48.573 13:32:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.573 13:32:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.573 rmmod nvme_tcp 00:24:48.573 rmmod nvme_fabrics 00:24:48.573 rmmod nvme_keyring 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2256029 ']' 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2256029 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2256029 ']' 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2256029 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:48.573 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256029 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256029' 00:24:48.833 killing 
process with pid 2256029 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2256029 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2256029 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.833 13:32:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.383 00:24:51.383 real 0m17.942s 00:24:51.383 user 1m2.459s 00:24:51.383 sys 0m7.881s 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.383 ************************************ 00:24:51.383 END TEST 
nvmf_fio_host 00:24:51.383 ************************************ 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.383 ************************************ 00:24:51.383 START TEST nvmf_failover 00:24:51.383 ************************************ 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:51.383 * Looking for test storage... 00:24:51.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.383 --rc genhtml_branch_coverage=1 00:24:51.383 --rc genhtml_function_coverage=1 00:24:51.383 --rc genhtml_legend=1 00:24:51.383 --rc geninfo_all_blocks=1 00:24:51.383 --rc geninfo_unexecuted_blocks=1 00:24:51.383 00:24:51.383 ' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.383 --rc genhtml_branch_coverage=1 00:24:51.383 --rc genhtml_function_coverage=1 00:24:51.383 --rc genhtml_legend=1 00:24:51.383 --rc geninfo_all_blocks=1 00:24:51.383 --rc geninfo_unexecuted_blocks=1 00:24:51.383 00:24:51.383 ' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.383 --rc genhtml_branch_coverage=1 00:24:51.383 --rc genhtml_function_coverage=1 00:24:51.383 --rc genhtml_legend=1 00:24:51.383 --rc geninfo_all_blocks=1 00:24:51.383 --rc geninfo_unexecuted_blocks=1 00:24:51.383 00:24:51.383 ' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:51.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.383 --rc genhtml_branch_coverage=1 00:24:51.383 --rc genhtml_function_coverage=1 00:24:51.383 --rc genhtml_legend=1 00:24:51.383 --rc geninfo_all_blocks=1 
00:24:51.383 --rc geninfo_unexecuted_blocks=1 00:24:51.383 00:24:51.383 ' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.383 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.384 13:32:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.528 13:32:44 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:59.528 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:59.528 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:59.528 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:59.528 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.528 13:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:24:59.528 00:24:59.528 --- 10.0.0.2 ping statistics --- 00:24:59.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.528 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:24:59.528 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:24:59.528 00:24:59.528 --- 10.0.0.1 ping statistics --- 00:24:59.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.528 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2262078 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2262078 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2262078 ']' 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.529 13:32:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.529 [2024-12-06 13:32:45.359773] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:24:59.529 [2024-12-06 13:32:45.359836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.529 [2024-12-06 13:32:45.458428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:59.529 [2024-12-06 13:32:45.510401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.529 [2024-12-06 13:32:45.510465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.529 [2024-12-06 13:32:45.510474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.529 [2024-12-06 13:32:45.510481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:59.529 [2024-12-06 13:32:45.510487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.529 [2024-12-06 13:32:45.512331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.529 [2024-12-06 13:32:45.512521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.529 [2024-12-06 13:32:45.512555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.789 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:59.790 [2024-12-06 13:32:46.406026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.790 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:00.050 Malloc0 00:25:00.050 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.312 13:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.573 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.573 [2024-12-06 13:32:47.213306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.834 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:00.834 [2024-12-06 13:32:47.417931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:00.834 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:01.094 [2024-12-06 13:32:47.618674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2262678 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2262678 /var/tmp/bdevperf.sock 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2262678 ']' 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.094 13:32:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.034 13:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.034 13:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:02.034 13:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:02.295 NVMe0n1 00:25:02.295 13:32:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:02.554 00:25:02.554 13:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2262912 00:25:02.554 13:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.554 13:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:03.937 13:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-12-06 13:32:50.353512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set [2024-12-06 13:32:50.353933]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.938 [2024-12-06 13:32:50.353996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 [2024-12-06 13:32:50.354028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc7fed0 is same with the state(6) to be set 00:25:03.939 13:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:07.240 13:32:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:07.240 00:25:07.240 13:32:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:07.240 [2024-12-06 13:32:53.854194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80980 is same with the state(6) to be set
00:25:07.241 13:32:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:10.539 13:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:10.539 [2024-12-06 13:32:57.043658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:10.539 13:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:11.482 13:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:11.743 [2024-12-06 13:32:58.234394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xb46140 is same with the state(6) to be set
00:25:11.744 13:32:58
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2262912 00:25:18.335 { 00:25:18.335 "results": [ 00:25:18.335 { 00:25:18.335 "job": "NVMe0n1", 00:25:18.335 "core_mask": "0x1", 00:25:18.335 "workload": "verify", 00:25:18.335 "status": "finished", 00:25:18.335 "verify_range": { 00:25:18.335 "start": 0, 00:25:18.335 "length": 16384 00:25:18.335 }, 00:25:18.335 "queue_depth": 128, 00:25:18.335 "io_size": 4096, 00:25:18.335 "runtime": 15.005065, 00:25:18.335 "iops": 12486.583696905012, 00:25:18.335 "mibps": 48.775717566035205, 00:25:18.335 "io_failed": 7845, 00:25:18.335 "io_timeout": 0, 00:25:18.335 "avg_latency_us": 9817.21234614196, 00:25:18.335 "min_latency_us": 505.17333333333335, 00:25:18.335 "max_latency_us": 32986.45333333333 00:25:18.335 } 00:25:18.335 ], 00:25:18.335 "core_count": 1 00:25:18.335 } 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2262678 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2262678 ']' 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2262678 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262678 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262678' 00:25:18.335 killing process with pid 2262678 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- 
# kill 2262678 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2262678 00:25:18.335 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:18.335 [2024-12-06 13:32:47.715016] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:25:18.335 [2024-12-06 13:32:47.715102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2262678 ] 00:25:18.335 [2024-12-06 13:32:47.811534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.335 [2024-12-06 13:32:47.863782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.335 Running I/O for 15 seconds... 00:25:18.335 11051.00 IOPS, 43.17 MiB/s [2024-12-06T12:33:04.994Z] [2024-12-06 13:32:50.355363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.335 [2024-12-06 13:32:50.355397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.335 [2024-12-06 13:32:50.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.335 [2024-12-06 13:32:50.355423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.335 [2024-12-06 13:32:50.355433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.335 [2024-12-06 13:32:50.355441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.335 [2024-12-06 13:32:50.355450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.335 [2024-12-06 13:32:50.355464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.335 [2024-12-06 13:32:50.355474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.335 [2024-12-06 13:32:50.355481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.335 [2024-12-06 13:32:50.355490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.336 [2024-12-06 13:32:50.355550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.336 [2024-12-06 13:32:50.355848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.355984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.355994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:18.336 [2024-12-06 13:32:50.356143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.336 [2024-12-06 13:32:50.356160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.336 [2024-12-06 13:32:50.356170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 
[2024-12-06 13:32:50.356430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.337 [2024-12-06 13:32:50.356616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.337 [2024-12-06 13:32:50.356623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.337 [2024-12-06 13:32:50.356633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.337 [2024-12-06 13:32:50.356640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-12-06 13:32:50.356650-13:32:50.357577: repeated command/completion pairs elided — READ lba:95536-95624 and WRITE lba:95632-95968 (sqid:1, nsid:1, len:8), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:18.339 [2024-12-06 13:32:50.357596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.339 [2024-12-06 13:32:50.357603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.339
[2024-12-06 13:32:50.357610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95976 len:8 PRP1 0x0 PRP2 0x0
00:25:18.339 [2024-12-06 13:32:50.357618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.339 [2024-12-06 13:32:50.357659] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 2024-12-06 13:32:50.357680-13:32:50.357735: four ASYNC EVENT REQUEST (0c) commands elided (qid:0, cid:0-3, cdw10:00000000 cdw11:00000000), each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:18.339 [2024-12-06 13:32:50.357742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:18.339 [2024-12-06 13:32:50.357773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b799d0 (9): Bad file descriptor
00:25:18.339 [2024-12-06 13:32:50.361375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:18.339 [2024-12-06 13:32:50.426335] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:18.339 11217.00 IOPS, 43.82 MiB/s [2024-12-06T12:33:04.998Z] 11415.67 IOPS, 44.59 MiB/s [2024-12-06T12:33:04.998Z] 11743.00 IOPS, 45.87 MiB/s [2024-12-06T12:33:04.998Z]
[2024-12-06 13:32:53.855766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.339 [2024-12-06 13:32:53.855795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-12-06 13:32:53.855808-13:32:53.856436: repeated command/completion pairs elided — READ lba:63240-63480 and WRITE lba:63496-63664 (sqid:1, nsid:1, len:8), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:18.340 [2024-12-06 13:32:53.856442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-06 13:32:53.856447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:18.340 [2024-12-06 13:32:53.856457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.340 [2024-12-06 13:32:53.856462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.340 [2024-12-06 13:32:53.856469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.340 [2024-12-06 13:32:53.856474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.340 [2024-12-06 13:32:53.856481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.340 [2024-12-06 13:32:53.856486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.340 [2024-12-06 13:32:53.856492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.340 [2024-12-06 13:32:53.856497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.340 [2024-12-06 13:32:53.856503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.340 [2024-12-06 13:32:53.856508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.340 [2024-12-06 13:32:53.856515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 
[2024-12-06 13:32:53.856658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 
13:32:53.856854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856918] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.341 [2024-12-06 13:32:53.856930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.341 [2024-12-06 13:32:53.856965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.341 [2024-12-06 13:32:53.856972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.856977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.856983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.856988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.856995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:18.342 [2024-12-06 13:32:53.857128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64144 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64152 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64160 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64168 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64176 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64184 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64192 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64200 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64208 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64216 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64224 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.342 [2024-12-06 13:32:53.857551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.342 [2024-12-06 13:32:53.857555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64232 len:8 PRP1 0x0 PRP2 0x0 00:25:18.342 [2024-12-06 13:32:53.857560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.342 [2024-12-06 13:32:53.857566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:25:18.342 [2024-12-06 13:32:53.857569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.342 [2024-12-06 13:32:53.870169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64240 len:8 PRP1 0x0 PRP2 0x0
00:25:18.342 [2024-12-06 13:32:53.870198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.342 [2024-12-06 13:32:53.870211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.342 [2024-12-06 13:32:53.870217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.342 [2024-12-06 13:32:53.870223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64248 len:8 PRP1 0x0 PRP2 0x0
00:25:18.342 [2024-12-06 13:32:53.870230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.342 [2024-12-06 13:32:53.870238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.342 [2024-12-06 13:32:53.870243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.342 [2024-12-06 13:32:53.870250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63232 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63240 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63248 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63256 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63264 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63272 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63280 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63288 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63296 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63304 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63312 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63320 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63328 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63336 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63344 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63352 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63360 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63368 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63376 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63384 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63392 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63400 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63408 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.343 [2024-12-06 13:32:53.870821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.343 [2024-12-06 13:32:53.870827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.343 [2024-12-06 13:32:53.870832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63416 len:8 PRP1 0x0 PRP2 0x0
00:25:18.343 [2024-12-06 13:32:53.870839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.870851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.870857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63424 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.870864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.870878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.870883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63432 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.870890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.870902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.870908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63440 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.870915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.870927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.870932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63448 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.870939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.870952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.870957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63456 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.870964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.870976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.870982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63464 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.870988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.870995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63472 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63480 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63496 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63504 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63512 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63520 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63528 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63536 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63544 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63552 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63560 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63568 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63576 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.344 [2024-12-06 13:32:53.871323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.344 [2024-12-06 13:32:53.871329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 PRP1 0x0 PRP2 0x0
00:25:18.344 [2024-12-06 13:32:53.871335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.344 [2024-12-06 13:32:53.871342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63592 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63600 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63608 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63616 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63624 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63632 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63640 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63656 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63664 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63672 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63680 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63688 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63696 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63704 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63712 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.871754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63720 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.871761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.871768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.871773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.879538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63728 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.879566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.879580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.879586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.879592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63736 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.879599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.879606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.879611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.879617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63744 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.879624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.879635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.879640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.879646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63752 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.879652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.879660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.879665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.879670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63760 len:8 PRP1 0x0 PRP2 0x0
00:25:18.345 [2024-12-06 13:32:53.879677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.345 [2024-12-06 13:32:53.879684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:18.345 [2024-12-06 13:32:53.879689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.345 [2024-12-06 13:32:53.879695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1
lba:63768 len:8 PRP1 0x0 PRP2 0x0 00:25:18.345 [2024-12-06 13:32:53.879701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.345 [2024-12-06 13:32:53.879708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63776 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63784 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63792 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879781] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63800 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63808 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63816 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 
13:32:53.879869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63824 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63832 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63840 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63848 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63856 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.879976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.879983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.879988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.879994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63864 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63872 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880038] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63880 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63888 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63896 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63904 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 
[2024-12-06 13:32:53.880123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63912 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63928 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63936 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63944 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.346 [2024-12-06 13:32:53.880269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63952 len:8 PRP1 0x0 PRP2 0x0 00:25:18.346 [2024-12-06 13:32:53.880276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.346 [2024-12-06 13:32:53.880283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.346 [2024-12-06 13:32:53.880288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63960 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63968 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63976 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63984 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63992 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63488 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64000 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64008 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64016 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64024 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64032 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64040 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64048 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64056 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 
[2024-12-06 13:32:53.880653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64064 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64072 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64080 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:64088 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64096 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64104 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64112 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880839] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64120 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.347 [2024-12-06 13:32:53.880887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64128 len:8 PRP1 0x0 PRP2 0x0 00:25:18.347 [2024-12-06 13:32:53.880896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.347 [2024-12-06 13:32:53.880905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.347 [2024-12-06 13:32:53.880913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.348 [2024-12-06 13:32:53.880920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64136 len:8 PRP1 0x0 PRP2 0x0 00:25:18.348 [2024-12-06 13:32:53.880929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.348 [2024-12-06 13:32:53.880978] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:18.348 [2024-12-06 13:32:53.881015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.348 [2024-12-06 13:32:53.881026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.348 [2024-12-06 13:32:53.881038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.348 [2024-12-06 13:32:53.881047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.348 [2024-12-06 13:32:53.881058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.348 [2024-12-06 13:32:53.881067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.348 [2024-12-06 13:32:53.881077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.348 [2024-12-06 13:32:53.881086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.348 [2024-12-06 13:32:53.881095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:18.348 [2024-12-06 13:32:53.881149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b799d0 (9): Bad file descriptor 00:25:18.348 [2024-12-06 13:32:53.885668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:18.348 [2024-12-06 13:32:53.951637] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:18.348 11752.40 IOPS, 45.91 MiB/s [2024-12-06T12:33:05.007Z]
11954.17 IOPS, 46.70 MiB/s [2024-12-06T12:33:05.007Z]
12092.00 IOPS, 47.23 MiB/s [2024-12-06T12:33:05.007Z]
12186.50 IOPS, 47.60 MiB/s [2024-12-06T12:33:05.007Z]
[2024-12-06 13:32:58.235884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.348 [2024-12-06 13:32:58.235913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.348 [2024-12-06 13:32:58.235928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:18.348 [2024-12-06 13:32:58.235934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-12-06 13:32:58.235941-13:32:58.237033: the same NOTICE pair repeats for each remaining queued command - WRITE sqid:1 lba:9376-9984 (8-block steps, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:9184-9296 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) - each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:18.350 [2024-12-06 13:32:58.237049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:18.350 [2024-12-06 13:32:58.237054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9992 len:8 PRP1 0x0 PRP2 0x0
00:25:18.350 [2024-12-06 13:32:58.237060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:18.350 [2024-12-06 13:32:58.237068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... 2024-12-06 13:32:58.237071-13:32:58.237201: the abort / manual-complete cycle repeats for WRITE sqid:1 cid:0 lba:10000-10048 (8-block steps, PRP1 0x0 PRP2 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:18.351 [2024-12-06 13:32:58.237205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:25:18.351 [2024-12-06 13:32:58.237209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10056 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10064 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10072 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10088 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10096 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10104 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 
[2024-12-06 13:32:58.237337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10120 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10128 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10136 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10152 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10160 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.237472] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.237475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.237480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10168 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.237485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10184 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 
13:32:58.250527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10192 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9304 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9320 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9328 len:8 PRP1 0x0 PRP2 0x0 00:25:18.351 [2024-12-06 13:32:58.250614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.351 [2024-12-06 13:32:58.250619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.351 [2024-12-06 13:32:58.250623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.351 [2024-12-06 13:32:58.250628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9336 len:8 PRP1 0x0 PRP2 0x0 00:25:18.352 [2024-12-06 13:32:58.250633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.352 [2024-12-06 13:32:58.250642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.352 [2024-12-06 13:32:58.250647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:8 PRP1 0x0 PRP2 0x0 00:25:18.352 [2024-12-06 13:32:58.250652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.352 [2024-12-06 13:32:58.250661] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.352 [2024-12-06 13:32:58.250665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9352 len:8 PRP1 0x0 PRP2 0x0 00:25:18.352 [2024-12-06 13:32:58.250670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:18.352 [2024-12-06 13:32:58.250681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:18.352 [2024-12-06 13:32:58.250685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9360 len:8 PRP1 0x0 PRP2 0x0 00:25:18.352 [2024-12-06 13:32:58.250690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250728] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:18.352 [2024-12-06 13:32:58.250753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.352 [2024-12-06 13:32:58.250759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.352 [2024-12-06 13:32:58.250772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.352 [2024-12-06 13:32:58.250786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.352 [2024-12-06 13:32:58.250797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.352 [2024-12-06 13:32:58.250802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:18.352 [2024-12-06 13:32:58.250836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b799d0 (9): Bad file descriptor 00:25:18.352 [2024-12-06 13:32:58.253968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:18.352 [2024-12-06 13:32:58.283882] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:18.352 12208.33 IOPS, 47.69 MiB/s [2024-12-06T12:33:05.011Z] 12273.40 IOPS, 47.94 MiB/s [2024-12-06T12:33:05.011Z] 12336.91 IOPS, 48.19 MiB/s [2024-12-06T12:33:05.011Z] 12394.25 IOPS, 48.42 MiB/s [2024-12-06T12:33:05.011Z] 12426.92 IOPS, 48.54 MiB/s [2024-12-06T12:33:05.011Z] 12459.00 IOPS, 48.67 MiB/s 00:25:18.352 Latency(us) 00:25:18.352 [2024-12-06T12:33:05.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.352 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:18.352 Verification LBA range: start 0x0 length 0x4000 00:25:18.352 NVMe0n1 : 15.01 12486.58 48.78 522.82 0.00 9817.21 505.17 32986.45 00:25:18.352 [2024-12-06T12:33:05.011Z] =================================================================================================================== 00:25:18.352 [2024-12-06T12:33:05.011Z] Total : 12486.58 48.78 522.82 0.00 9817.21 505.17 32986.45 00:25:18.352 Received shutdown signal, test time was about 15.000000 seconds 00:25:18.352 00:25:18.352 Latency(us) 00:25:18.352 [2024-12-06T12:33:05.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.352 [2024-12-06T12:33:05.011Z] =================================================================================================================== 00:25:18.352 [2024-12-06T12:33:05.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2265878 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2265878 /var/tmp/bdevperf.sock 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2265878 ']' 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.352 13:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.938 13:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.938 13:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:18.938 13:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:18.938 [2024-12-06 13:33:05.530409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:18.938 13:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:19.198 [2024-12-06 13:33:05.714847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:19.198 13:33:05 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:19.459 NVMe0n1 00:25:19.459 13:33:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:19.718 00:25:19.718 13:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:19.978 00:25:19.978 13:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:19.978 13:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:20.237 13:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.497 13:33:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:23.789 13:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.789 13:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:23.789 13:33:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2267471 00:25:23.789 13:33:10 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:23.789 13:33:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2267471 00:25:24.728 { 00:25:24.728 "results": [ 00:25:24.728 { 00:25:24.728 "job": "NVMe0n1", 00:25:24.728 "core_mask": "0x1", 00:25:24.728 "workload": "verify", 00:25:24.728 "status": "finished", 00:25:24.728 "verify_range": { 00:25:24.728 "start": 0, 00:25:24.728 "length": 16384 00:25:24.728 }, 00:25:24.728 "queue_depth": 128, 00:25:24.728 "io_size": 4096, 00:25:24.728 "runtime": 1.008354, 00:25:24.728 "iops": 12860.562857885227, 00:25:24.728 "mibps": 50.236573663614166, 00:25:24.728 "io_failed": 0, 00:25:24.728 "io_timeout": 0, 00:25:24.728 "avg_latency_us": 9919.677565288916, 00:25:24.728 "min_latency_us": 1686.1866666666667, 00:25:24.728 "max_latency_us": 10649.6 00:25:24.728 } 00:25:24.728 ], 00:25:24.728 "core_count": 1 00:25:24.728 } 00:25:24.728 13:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:24.728 [2024-12-06 13:33:04.577646] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:25:24.728 [2024-12-06 13:33:04.577705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265878 ] 00:25:24.728 [2024-12-06 13:33:04.663458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.728 [2024-12-06 13:33:04.692496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.728 [2024-12-06 13:33:06.909814] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:24.728 [2024-12-06 13:33:06.909854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.728 [2024-12-06 13:33:06.909863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.728 [2024-12-06 13:33:06.909871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.728 [2024-12-06 13:33:06.909876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.728 [2024-12-06 13:33:06.909882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.728 [2024-12-06 13:33:06.909888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.728 [2024-12-06 13:33:06.909893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.728 [2024-12-06 13:33:06.909899] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.728 [2024-12-06 13:33:06.909904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:24.728 [2024-12-06 13:33:06.909927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:24.728 [2024-12-06 13:33:06.909938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9919d0 (9): Bad file descriptor 00:25:24.728 [2024-12-06 13:33:06.919640] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:24.728 Running I/O for 1 seconds... 00:25:24.728 12836.00 IOPS, 50.14 MiB/s 00:25:24.728 Latency(us) 00:25:24.728 [2024-12-06T12:33:11.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.728 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:24.728 Verification LBA range: start 0x0 length 0x4000 00:25:24.728 NVMe0n1 : 1.01 12860.56 50.24 0.00 0.00 9919.68 1686.19 10649.60 00:25:24.728 [2024-12-06T12:33:11.387Z] =================================================================================================================== 00:25:24.728 [2024-12-06T12:33:11.387Z] Total : 12860.56 50.24 0.00 0.00 9919.68 1686.19 10649.60 00:25:24.728 13:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:24.728 13:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:24.988 13:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:24.988 13:33:11 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:24.988 13:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:25.250 13:33:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.511 13:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2265878 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2265878 ']' 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2265878 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2265878 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2265878' 00:25:28.813 killing 
process with pid 2265878 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2265878 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2265878 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:28.813 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.073 rmmod nvme_tcp 00:25:29.073 rmmod nvme_fabrics 00:25:29.073 rmmod nvme_keyring 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2262078 ']' 00:25:29.073 13:33:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2262078 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2262078 ']' 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2262078 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262078 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262078' 00:25:29.073 killing process with pid 2262078 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2262078 00:25:29.073 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2262078 00:25:29.334 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.335 13:33:15 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.335 13:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.247 00:25:31.247 real 0m40.314s 00:25:31.247 user 2m3.874s 00:25:31.247 sys 0m8.770s 00:25:31.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:31.247 ************************************ 00:25:31.247 END TEST nvmf_failover 00:25:31.247 ************************************ 00:25:31.508 13:33:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:31.508 13:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.508 13:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.508 13:33:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.508 ************************************ 00:25:31.508 START TEST nvmf_host_discovery 00:25:31.508 ************************************ 00:25:31.508 13:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:31.508 * Looking for test storage... 
00:25:31.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.508 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:31.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.770 --rc genhtml_branch_coverage=1 00:25:31.770 --rc genhtml_function_coverage=1 00:25:31.770 --rc 
genhtml_legend=1 00:25:31.770 --rc geninfo_all_blocks=1 00:25:31.770 --rc geninfo_unexecuted_blocks=1 00:25:31.770 00:25:31.770 ' 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:31.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.770 --rc genhtml_branch_coverage=1 00:25:31.770 --rc genhtml_function_coverage=1 00:25:31.770 --rc genhtml_legend=1 00:25:31.770 --rc geninfo_all_blocks=1 00:25:31.770 --rc geninfo_unexecuted_blocks=1 00:25:31.770 00:25:31.770 ' 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:31.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.770 --rc genhtml_branch_coverage=1 00:25:31.770 --rc genhtml_function_coverage=1 00:25:31.770 --rc genhtml_legend=1 00:25:31.770 --rc geninfo_all_blocks=1 00:25:31.770 --rc geninfo_unexecuted_blocks=1 00:25:31.770 00:25:31.770 ' 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:31.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.770 --rc genhtml_branch_coverage=1 00:25:31.770 --rc genhtml_function_coverage=1 00:25:31.770 --rc genhtml_legend=1 00:25:31.770 --rc geninfo_all_blocks=1 00:25:31.770 --rc geninfo_unexecuted_blocks=1 00:25:31.770 00:25:31.770 ' 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.770 13:33:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.770 13:33:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.770 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.771 13:33:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.771 13:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.909 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.909 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:39.910 
13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.910 13:33:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:39.910 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:39.910 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:39.910 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:39.910 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:25:39.910 00:25:39.910 --- 10.0.0.2 ping statistics --- 00:25:39.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.910 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:25:39.910 00:25:39.910 --- 10.0.0.1 ping statistics --- 00:25:39.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.910 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.910 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.911 
13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2272748 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2272748 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2272748 ']' 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.911 13:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.911 [2024-12-06 13:33:25.791614] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:25:39.911 [2024-12-06 13:33:25.791682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.911 [2024-12-06 13:33:25.891546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.911 [2024-12-06 13:33:25.942359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.911 [2024-12-06 13:33:25.942410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.911 [2024-12-06 13:33:25.942419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.911 [2024-12-06 13:33:25.942426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.911 [2024-12-06 13:33:25.942432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:39.911 [2024-12-06 13:33:25.943190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 [2024-12-06 13:33:26.650658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 [2024-12-06 13:33:26.662882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:40.172 13:33:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 null0 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 null1 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2273016 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2273016 /tmp/host.sock 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2273016 ']' 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:40.172 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.172 13:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 [2024-12-06 13:33:26.762830] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:25:40.172 [2024-12-06 13:33:26.762906] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273016 ] 00:25:40.432 [2024-12-06 13:33:26.858247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.432 [2024-12-06 13:33:26.911273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:41.057 
13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:41.057 13:33:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.057 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.345 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:41.346 
13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 [2024-12-06 13:33:27.930171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.667 13:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:41.667 13:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:42.236 [2024-12-06 13:33:28.639439] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:42.236 [2024-12-06 13:33:28.639463] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:42.236 [2024-12-06 13:33:28.639477] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.236 [2024-12-06 13:33:28.766860] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:42.495 [2024-12-06 13:33:28.991110] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:42.495 [2024-12-06 13:33:28.992089] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1d50320:1 started. 00:25:42.495 [2024-12-06 13:33:28.993727] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.495 [2024-12-06 13:33:28.993746] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:42.495 [2024-12-06 13:33:28.998613] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d50320 was disconnected and freed. delete nvme_qpair. 00:25:42.495 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.495 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.755 13:33:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:42.755 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.756 [2024-12-06 13:33:29.365327] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d506a0:1 started. 
00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:42.756 [2024-12-06 13:33:29.369060] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d506a0 was disconnected and freed. delete nvme_qpair. 
00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.756 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.016 [2024-12-06 13:33:29.470234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:43.016 [2024-12-06 13:33:29.470484] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:43.016 [2024-12-06 13:33:29.470504] 
bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.016 [2024-12-06 13:33:29.599893] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:43.016 13:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # 
sleep 1 00:25:43.275 [2024-12-06 13:33:29.704795] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:43.275 [2024-12-06 13:33:29.704833] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.275 [2024-12-06 13:33:29.704842] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.275 [2024-12-06 13:33:29.704847] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 
4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.215 [2024-12-06 13:33:30.746652] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:44.215 [2024-12-06 13:33:30.746678] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:44.215 [2024-12-06 13:33:30.755419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.215 [2024-12-06 13:33:30.755441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-12-06 13:33:30.755450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.215 [2024-12-06 13:33:30.755463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-12-06 13:33:30.755471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.215 [2024-12-06 13:33:30.755479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-12-06 13:33:30.755487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.215 [2024-12-06 13:33:30.755494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.215 [2024-12-06 13:33:30.755502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.215 13:33:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.215 [2024-12-06 13:33:30.765430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor 00:25:44.215 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.215 [2024-12-06 13:33:30.775470] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:44.215 [2024-12-06 13:33:30.775482] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:44.215 [2024-12-06 13:33:30.775490] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:44.215 [2024-12-06 13:33:30.775500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:44.215 [2024-12-06 13:33:30.775520] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:44.216 [2024-12-06 13:33:30.775752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.216 [2024-12-06 13:33:30.775767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22470 with addr=10.0.0.2, port=4420 00:25:44.216 [2024-12-06 13:33:30.775775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set 00:25:44.216 [2024-12-06 13:33:30.775788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor 00:25:44.216 [2024-12-06 13:33:30.775806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:44.216 [2024-12-06 13:33:30.775814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:44.216 [2024-12-06 13:33:30.775823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:44.216 [2024-12-06 13:33:30.775830] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:44.216 [2024-12-06 13:33:30.775836] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:44.216 [2024-12-06 13:33:30.775842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:44.216 [2024-12-06 13:33:30.785550] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:44.216 [2024-12-06 13:33:30.785562] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:44.216 [2024-12-06 13:33:30.785567] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:44.216 [2024-12-06 13:33:30.785572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:44.216 [2024-12-06 13:33:30.785586] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:44.216 [2024-12-06 13:33:30.785884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.216 [2024-12-06 13:33:30.785895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22470 with addr=10.0.0.2, port=4420 00:25:44.216 [2024-12-06 13:33:30.785903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set 00:25:44.216 [2024-12-06 13:33:30.785913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor 00:25:44.216 [2024-12-06 13:33:30.785938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:44.216 [2024-12-06 13:33:30.785945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:44.216 [2024-12-06 13:33:30.785952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:44.216 [2024-12-06 13:33:30.785959] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:44.216 [2024-12-06 13:33:30.785963] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:44.216 [2024-12-06 13:33:30.785968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:44.216 [2024-12-06 13:33:30.795618] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:44.216 [2024-12-06 13:33:30.795631] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:44.216 [2024-12-06 13:33:30.795636] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:44.216 [2024-12-06 13:33:30.795641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:44.216 [2024-12-06 13:33:30.795655] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:44.216 [2024-12-06 13:33:30.795941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.216 [2024-12-06 13:33:30.795953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22470 with addr=10.0.0.2, port=4420 00:25:44.216 [2024-12-06 13:33:30.795961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set 00:25:44.216 [2024-12-06 13:33:30.795972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor 00:25:44.216 [2024-12-06 13:33:30.795988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:44.216 [2024-12-06 13:33:30.795995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:44.216 [2024-12-06 13:33:30.796002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:44.216 [2024-12-06 13:33:30.796008] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:44.216 [2024-12-06 13:33:30.796013] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:44.216 [2024-12-06 13:33:30.796017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:44.216 [2024-12-06 13:33:30.805687] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:44.216 [2024-12-06 13:33:30.805702] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:44.216 [2024-12-06 13:33:30.805707] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:44.216 [2024-12-06 13:33:30.805711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:44.216 [2024-12-06 13:33:30.805725] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:44.216 [2024-12-06 13:33:30.805917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.216 [2024-12-06 13:33:30.805928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22470 with addr=10.0.0.2, port=4420
00:25:44.216 [2024-12-06 13:33:30.805936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set
00:25:44.216 [2024-12-06 13:33:30.805947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor
00:25:44.216 [2024-12-06 13:33:30.805957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:44.216 [2024-12-06 13:33:30.805964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:44.216 [2024-12-06 13:33:30.805971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:44.216 [2024-12-06 13:33:30.805977] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:44.216 [2024-12-06 13:33:30.805982] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:44.216 [2024-12-06 13:33:30.805986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.216 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:44.216 [2024-12-06 13:33:30.815756] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:44.216 [2024-12-06 13:33:30.815770] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:44.216 [2024-12-06 13:33:30.815775] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:44.216 [2024-12-06 13:33:30.815780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:44.216 [2024-12-06 13:33:30.815794] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:44.216 [2024-12-06 13:33:30.816077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.216 [2024-12-06 13:33:30.816088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22470 with addr=10.0.0.2, port=4420
00:25:44.216 [2024-12-06 13:33:30.816096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set
00:25:44.216 [2024-12-06 13:33:30.816107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor
00:25:44.216 [2024-12-06 13:33:30.816117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:44.216 [2024-12-06 13:33:30.816124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:44.216 [2024-12-06 13:33:30.816136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:44.216 [2024-12-06 13:33:30.816142] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:44.216 [2024-12-06 13:33:30.816147] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:44.216 [2024-12-06 13:33:30.816152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:44.216 [2024-12-06 13:33:30.825825] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:44.216 [2024-12-06 13:33:30.825836] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:44.216 [2024-12-06 13:33:30.825840] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:44.216 [2024-12-06 13:33:30.825845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:44.216 [2024-12-06 13:33:30.825858] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:44.216 [2024-12-06 13:33:30.826070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.216 [2024-12-06 13:33:30.826081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d22470 with addr=10.0.0.2, port=4420
00:25:44.216 [2024-12-06 13:33:30.826088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d22470 is same with the state(6) to be set
00:25:44.217 [2024-12-06 13:33:30.826099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22470 (9): Bad file descriptor
00:25:44.217 [2024-12-06 13:33:30.826109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:44.217 [2024-12-06 13:33:30.826116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:44.217 [2024-12-06 13:33:30.826123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:44.217 [2024-12-06 13:33:30.826129] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:44.217 [2024-12-06 13:33:30.826134] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:44.217 [2024-12-06 13:33:30.826138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:44.217 [2024-12-06 13:33:30.834248] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:25:44.217 [2024-12-06 13:33:30.834266] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:44.217 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:44.478 13:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.478 13:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.858 [2024-12-06 13:33:32.148619] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:45.858 [2024-12-06 13:33:32.148634] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:45.858 [2024-12-06 13:33:32.148644] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:45.858 [2024-12-06 13:33:32.236891] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:45.858 [2024-12-06 13:33:32.341650] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:25:45.858 [2024-12-06 13:33:32.342298] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d56670:1 started.
00:25:45.858 [2024-12-06 13:33:32.343683] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:45.858 [2024-12-06 13:33:32.343705] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:45.858 [2024-12-06 13:33:32.346770] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d56670 was disconnected and freed. delete nvme_qpair.
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.858 request:
00:25:45.858 {
00:25:45.858 "name": "nvme",
00:25:45.858 "trtype": "tcp",
00:25:45.858 "traddr": "10.0.0.2",
00:25:45.858 "adrfam": "ipv4",
00:25:45.858 "trsvcid": "8009",
00:25:45.858 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:45.858 "wait_for_attach": true,
00:25:45.858 "method": "bdev_nvme_start_discovery",
00:25:45.858 "req_id": 1
00:25:45.858 }
00:25:45.858 Got JSON-RPC error response
00:25:45.858 response:
00:25:45.858 {
00:25:45.858 "code": -17,
00:25:45.858 "message": "File exists"
00:25:45.858 }
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.858 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:45.858 request:
00:25:45.858 {
00:25:45.858 "name": "nvme_second",
00:25:45.858 "trtype": "tcp",
00:25:45.858 "traddr": "10.0.0.2",
00:25:45.858 "adrfam": "ipv4",
00:25:45.858 "trsvcid": "8009",
00:25:45.858 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:45.859 "wait_for_attach": true,
00:25:45.859 "method": "bdev_nvme_start_discovery",
00:25:45.859 "req_id": 1
00:25:45.859 }
00:25:45.859 Got JSON-RPC error response
00:25:45.859 response:
00:25:45.859 {
00:25:45.859 "code": -17,
00:25:45.859 "message": "File exists"
00:25:45.859 }
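Both `NOT rpc_cmd ... bdev_nvme_start_discovery` calls above are expected to fail: a discovery service is already attached through /tmp/host.sock, and SPDK reports the duplicate start as JSON-RPC error code -17 with message "File exists", i.e. the negated errno `-EEXIST`. A sketch of how a caller can recognize this (the error object is copied from the log; the parsing helper is illustrative, not part of the test scripts):

```python
import errno
import json

# Error object exactly as printed in the log for the duplicate
# bdev_nvme_start_discovery call.
raw = '{"code": -17, "message": "File exists"}'

def is_already_started(error_json: str) -> bool:
    # SPDK returns negative errno values in the JSON-RPC "code" field;
    # -17 corresponds to -EEXIST, meaning a discovery service with this
    # name/target is already running.
    err = json.loads(error_json)
    return err["code"] == -errno.EEXIST

print(is_already_started(raw))  # prints True
```

The same convention explains the later `-110` ("Connection timed out") response: it is the negated `ETIMEDOUT` from the 3000 ms attach timeout on the unreachable port 8010.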
00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:45.859 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.119 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:46.119 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:46.119 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.119 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.119 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:46.120 13:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.061 [2024-12-06 13:33:33.607602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.061 [2024-12-06 13:33:33.607625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d56c80 with addr=10.0.0.2, port=8010 00:25:47.061 [2024-12-06 13:33:33.607635] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:47.061 [2024-12-06 13:33:33.607640] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:47.061 [2024-12-06 13:33:33.607646] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:47.999 [2024-12-06 13:33:34.609951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.999 [2024-12-06 13:33:34.609974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e90 with addr=10.0.0.2, port=8010 00:25:47.999 [2024-12-06 13:33:34.609983] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:47.999 [2024-12-06 13:33:34.609988] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:47.999 [2024-12-06 13:33:34.609992] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:49.379 [2024-12-06 13:33:35.611948] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:49.379 request: 00:25:49.379 { 00:25:49.379 "name": "nvme_second", 00:25:49.379 "trtype": "tcp", 00:25:49.379 "traddr": "10.0.0.2", 00:25:49.379 "adrfam": "ipv4", 00:25:49.379 "trsvcid": "8010", 00:25:49.379 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:49.379 "wait_for_attach": false, 00:25:49.379 "attach_timeout_ms": 3000, 00:25:49.379 "method": "bdev_nvme_start_discovery", 00:25:49.379 "req_id": 1 
00:25:49.379 } 00:25:49.379 Got JSON-RPC error response 00:25:49.379 response: 00:25:49.379 { 00:25:49.379 "code": -110, 00:25:49.379 "message": "Connection timed out" 00:25:49.379 } 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2273016 00:25:49.379 13:33:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.379 rmmod nvme_tcp 00:25:49.379 rmmod nvme_fabrics 00:25:49.379 rmmod nvme_keyring 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2272748 ']' 00:25:49.379 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2272748 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2272748 ']' 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2272748 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272748 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272748' 00:25:49.380 killing process with pid 2272748 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2272748 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2272748 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.380 13:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.925 13:33:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:25:51.925 00:25:51.925 real 0m20.031s 00:25:51.925 user 0m22.992s 00:25:51.925 sys 0m7.229s 00:25:51.925 13:33:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.925 13:33:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.925 ************************************ 00:25:51.925 END TEST nvmf_host_discovery 00:25:51.925 ************************************ 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.925 ************************************ 00:25:51.925 START TEST nvmf_host_multipath_status 00:25:51.925 ************************************ 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:51.925 * Looking for test storage... 
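The teardown above removes only SPDK's firewall rules by dumping the whole ruleset, dropping every line tagged `SPDK_NVMF`, and restoring the remainder (`iptables-save | grep -v SPDK_NVMF | iptables-restore`). A minimal sketch of that filtering step, using a fabricated ruleset in place of a live `iptables-save` dump:

```shell
# Fabricated ruleset standing in for real `iptables-save` output
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -p icmp -j ACCEPT'

# Keep every rule except those carrying the SPDK_NVMF marker;
# a real run would pipe this straight into `iptables-restore`
filtered=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)

printf '%s\n' "$filtered"
```

Tagging rules with a comment at insert time (as `ipts` does later in this log) is what makes this blanket cleanup safe: only SPDK's own rules match the filter.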
00:25:51.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:51.925 13:33:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.925 13:33:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.925 --rc genhtml_branch_coverage=1 00:25:51.925 --rc genhtml_function_coverage=1 00:25:51.925 --rc genhtml_legend=1 00:25:51.925 --rc geninfo_all_blocks=1 00:25:51.925 --rc geninfo_unexecuted_blocks=1 00:25:51.925 00:25:51.925 ' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.925 --rc genhtml_branch_coverage=1 00:25:51.925 --rc genhtml_function_coverage=1 00:25:51.925 --rc genhtml_legend=1 00:25:51.925 --rc geninfo_all_blocks=1 00:25:51.925 --rc geninfo_unexecuted_blocks=1 00:25:51.925 00:25:51.925 ' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.925 --rc genhtml_branch_coverage=1 00:25:51.925 --rc genhtml_function_coverage=1 00:25:51.925 --rc genhtml_legend=1 00:25:51.925 --rc geninfo_all_blocks=1 00:25:51.925 --rc geninfo_unexecuted_blocks=1 00:25:51.925 00:25:51.925 ' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:51.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.925 --rc genhtml_branch_coverage=1 00:25:51.925 --rc genhtml_function_coverage=1 00:25:51.925 --rc genhtml_legend=1 00:25:51.925 --rc geninfo_all_blocks=1 00:25:51.925 --rc geninfo_unexecuted_blocks=1 00:25:51.925 00:25:51.925 ' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:51.925 
13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.925 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
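The `common.sh: line 33: [: : integer expression expected` message above is shell noise from comparing an empty variable with `-eq` (`'[' '' -eq 1 ']'`); the script tolerates the failed test, but defaulting the variable avoids the error entirely. A minimal reproduction and fix (the `flag` name is illustrative, not the variable common.sh actually tests):

```shell
flag=""   # empty, as in the failing check at common.sh line 33

# Broken form: expands to `[ '' -eq 1 ]` and prints
# "[: : integer expression expected" on stderr
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
fi

# Safer form: default empty/unset to 0 before the numeric test
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

The `${flag:-0}` expansion substitutes `0` when the variable is unset *or* empty, so the numeric comparison always sees an integer.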
00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.926 13:33:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.926 13:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
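The PCI scan that follows resolves each NIC address to its kernel net device by globbing `net/` under the device's sysfs node, then stripping each hit down to the bare interface name. A sketch of that lookup against a fabricated sysfs-style tree (the temporary directory stands in for `/sys/bus/pci/devices`):

```shell
# Fabricate a sysfs-like layout: one PCI device exposing one net device
tmp=$(mktemp -d)
pci="0000:4b:00.0"
mkdir -p "$tmp/$pci/net/cvl_0_0"

# Glob the device's net/ directory (as in nvmf/common.sh@411),
# then keep only the last path component, the interface name (@427)
found=""
for dev in "$tmp/$pci/net/"*; do
  found="${dev##*/}"
done
echo "Found net devices under $pci: $found"

rm -rf "$tmp"
```

This is why the log can print `Found net devices under 0000:4b:00.0: cvl_0_0` without consulting any userspace tool: the kernel publishes the PCI-to-netdev mapping directly in sysfs.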
00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:00.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:00.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.072 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:00.073 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.073 13:33:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:00.073 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.073 13:33:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:00.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:26:00.073 00:26:00.073 --- 10.0.0.2 ping statistics --- 00:26:00.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.073 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:26:00.073 00:26:00.073 --- 10.0.0.1 ping statistics --- 00:26:00.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.073 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2279156 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2279156 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2279156 ']' 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.073 13:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 [2024-12-06 13:33:45.927603] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:26:00.073 [2024-12-06 13:33:45.927670] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.073 [2024-12-06 13:33:46.027926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:00.073 [2024-12-06 13:33:46.079724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.073 [2024-12-06 13:33:46.079777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
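The remainder of this test repeatedly queries the `bdev_nvme_get_io_paths` RPC and filters the result with jq expressions such as `.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current` to assert per-port path state (`current`, `connected`, `accessible`) after each ANA-state change. A minimal Python sketch of that check follows; the JSON shape is inferred from the jq filter in the log, and the sample document is illustrative, not captured output from this run:

```python
def port_status(doc, trsvcid, field):
    # Mirror of the test's jq filter:
    #   .poll_groups[].io_paths[] | select(.transport.trsvcid==TRSVCID).FIELD
    # Returns every value of `field` for paths on the given port.
    return [
        path[field]
        for group in doc["poll_groups"]
        for path in group["io_paths"]
        if path["transport"]["trsvcid"] == trsvcid
    ]

# Illustrative sample shaped after the jq filter; any field not touched
# by the filter in the log is an assumption, not recorded RPC output.
sample = {
    "poll_groups": [
        {
            "io_paths": [
                {"transport": {"trsvcid": "4420"},
                 "current": True, "connected": True, "accessible": True},
                {"transport": {"trsvcid": "4421"},
                 "current": False, "connected": True, "accessible": True},
            ]
        }
    ]
}

# The log's "check_status true false true true true true" in miniature:
assert port_status(sample, "4420", "current") == [True]
assert port_status(sample, "4421", "current") == [False]
assert port_status(sample, "4420", "connected") == [True]
assert port_status(sample, "4421", "accessible") == [True]
```

This mirrors how the script treats each jq result as a single boolean to compare against the expected state for that port; with both listeners optimized, exactly one path is `current` while both remain `connected` and `accessible`.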
00:26:00.073 [2024-12-06 13:33:46.079785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.073 [2024-12-06 13:33:46.079792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.073 [2024-12-06 13:33:46.079799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.073 [2024-12-06 13:33:46.081493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.073 [2024-12-06 13:33:46.081504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2279156 00:26:00.335 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:00.335 [2024-12-06 13:33:46.966124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.597 13:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:00.597 Malloc0 00:26:00.597 13:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:00.858 13:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.120 13:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.382 [2024-12-06 13:33:47.797180] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.382 13:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.382 [2024-12-06 13:33:47.989658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2279568 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2279568 /var/tmp/bdevperf.sock 00:26:01.382 13:33:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2279568 ']' 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.382 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.327 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.327 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:02.327 13:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:02.588 13:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:03.185 Nvme0n1 00:26:03.185 13:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:03.445 Nvme0n1 00:26:03.445 13:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:03.445 13:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:05.986 13:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:05.986 13:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:05.986 13:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:05.986 13:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.924 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.184 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.184 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.184 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.184 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.444 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.444 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.444 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.444 13:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.705 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.966 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.966 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:07.966 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:08.227 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:08.227 13:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:09.611 13:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:09.611 13:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.611 13:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.611 13:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.611 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.872 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.872 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.872 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.872 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.132 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.391 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.391 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:10.391 13:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:10.651 13:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:10.651 13:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.035 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:12.296 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.296 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:12.296 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.296 13:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:12.558 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.558 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:12.558 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.558 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:12.818 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:13.080 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:13.340 13:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:14.281 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:14.281 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:14.281 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.281 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.541 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.541 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:14.541 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.541 13:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.541 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.541 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.541 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.541 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.821 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.821 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.821 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.821 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.144 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.404 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.404 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:15.404 13:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:15.404 13:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.664 13:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:16.607 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:16.607 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:16.607 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.607 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.868 13:34:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.868 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:16.868 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.868 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.128 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.128 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.128 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.128 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.128 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.389 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.389 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.389 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.389 
13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.389 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:17.389 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.389 13:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.649 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.649 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:17.649 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.649 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.910 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.910 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:17.910 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:17.910 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.171 13:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:19.112 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:19.112 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:19.112 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.112 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.372 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.372 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:19.372 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.372 13:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.631 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.891 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.891 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:19.891 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.891 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.151 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.151 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.151 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.151 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.151 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.151 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:20.410 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:20.410 13:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:20.669 13:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:20.928 13:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:21.867 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:21.867 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:21.867 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:21.867 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.867 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.867 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.129 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.129 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.129 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.129 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.129 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.129 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.390 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.390 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.390 13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.390 
13:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.650 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.911 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.911 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:22.911 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.172 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:23.438 13:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:24.379 13:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:24.379 13:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:24.379 13:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.379 13:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.379 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.379 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:24.379 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.379 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.638 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.638 13:34:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.638 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.638 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.898 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.898 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.898 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.898 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.157 
13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.157 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.443 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.443 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:25.443 13:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:25.702 13:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:25.702 13:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.083 13:34:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.083 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.343 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.343 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.343 13:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.343 13:34:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.604 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.865 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.865 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:27.865 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:28.126 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.386 13:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:29.322 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:29.323 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.323 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.323 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.582 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.582 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.582 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.582 13:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.582 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.582 13:34:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.582 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.582 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.841 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.841 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.841 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.842 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.101 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.102 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.102 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.102 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.102 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.102 
13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.102 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.102 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2279568 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2279568 ']' 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2279568 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279568 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279568' 00:26:30.362 killing process with pid 2279568 00:26:30.362 13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2279568 00:26:30.362 
13:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2279568 00:26:30.362 { 00:26:30.362 "results": [ 00:26:30.362 { 00:26:30.362 "job": "Nvme0n1", 00:26:30.362 "core_mask": "0x4", 00:26:30.362 "workload": "verify", 00:26:30.362 "status": "terminated", 00:26:30.362 "verify_range": { 00:26:30.362 "start": 0, 00:26:30.362 "length": 16384 00:26:30.362 }, 00:26:30.362 "queue_depth": 128, 00:26:30.362 "io_size": 4096, 00:26:30.362 "runtime": 26.839243, 00:26:30.362 "iops": 11938.563244872443, 00:26:30.362 "mibps": 46.63501267528298, 00:26:30.362 "io_failed": 0, 00:26:30.362 "io_timeout": 0, 00:26:30.362 "avg_latency_us": 10700.99241558528, 00:26:30.362 "min_latency_us": 349.8666666666667, 00:26:30.362 "max_latency_us": 3019898.88 00:26:30.362 } 00:26:30.362 ], 00:26:30.362 "core_count": 1 00:26:30.362 } 00:26:30.626 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2279568 00:26:30.626 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.626 [2024-12-06 13:33:48.070108] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:26:30.626 [2024-12-06 13:33:48.070190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279568 ] 00:26:30.626 [2024-12-06 13:33:48.164707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.626 [2024-12-06 13:33:48.214750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.626 Running I/O for 90 seconds... 
00:26:30.626 10403.00 IOPS, 40.64 MiB/s [2024-12-06T12:34:17.285Z] 10821.00 IOPS, 42.27 MiB/s [2024-12-06T12:34:17.285Z] 10929.00 IOPS, 42.69 MiB/s [2024-12-06T12:34:17.285Z] 11318.50 IOPS, 44.21 MiB/s [2024-12-06T12:34:17.285Z] 11633.00 IOPS, 45.44 MiB/s [2024-12-06T12:34:17.285Z] 11880.67 IOPS, 46.41 MiB/s [2024-12-06T12:34:17.285Z] 12014.14 IOPS, 46.93 MiB/s [2024-12-06T12:34:17.285Z] 12134.00 IOPS, 47.40 MiB/s [2024-12-06T12:34:17.285Z] 12223.11 IOPS, 47.75 MiB/s [2024-12-06T12:34:17.285Z] 12290.00 IOPS, 48.01 MiB/s [2024-12-06T12:34:17.285Z] 12342.36 IOPS, 48.21 MiB/s [2024-12-06T12:34:17.285Z] [2024-12-06 13:34:02.015012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.626 [2024-12-06 13:34:02.015044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.626 [2024-12-06 13:34:02.015074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.626 [2024-12-06 13:34:02.015081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.626 [2024-12-06 13:34:02.015093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.626 [2024-12-06 13:34:02.015098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.626 [2024-12-06 13:34:02.015109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.626 [2024-12-06 13:34:02.015114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:30.626 [2024-12-06 13:34:02.015124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:30.626 [2024-12-06 13:34:02.015129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for WRITE lba 130648-131064 and lba 0-240 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba 130296-130496 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all sqid:1 nsid:1 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd 0078 through 0064, p:0 m:0 dnr:0 ...]
00:26:30.629 [2024-12-06 13:34:02.017774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.629 
[2024-12-06 13:34:02.017779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.629 [2024-12-06 13:34:02.017794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.629 [2024-12-06 13:34:02.017799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.629 [2024-12-06 13:34:02.017814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.630 
[2024-12-06 13:34:02.017897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.017984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.017999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 
[2024-12-06 13:34:02.018004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.018019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.018024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:02.018040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:02.018045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.630 12279.92 IOPS, 47.97 MiB/s [2024-12-06T12:34:17.289Z] 11335.31 IOPS, 44.28 MiB/s [2024-12-06T12:34:17.289Z] 10525.64 IOPS, 41.12 MiB/s [2024-12-06T12:34:17.289Z] 9920.33 IOPS, 38.75 MiB/s [2024-12-06T12:34:17.289Z] 10100.12 IOPS, 39.45 MiB/s [2024-12-06T12:34:17.289Z] 10261.53 IOPS, 40.08 MiB/s [2024-12-06T12:34:17.289Z] 10613.56 IOPS, 41.46 MiB/s [2024-12-06T12:34:17.289Z] 10939.11 IOPS, 42.73 MiB/s [2024-12-06T12:34:17.289Z] 11149.35 IOPS, 43.55 MiB/s [2024-12-06T12:34:17.289Z] 11234.38 IOPS, 43.88 MiB/s [2024-12-06T12:34:17.289Z] 11303.45 IOPS, 44.15 MiB/s [2024-12-06T12:34:17.289Z] 11513.70 IOPS, 44.98 MiB/s [2024-12-06T12:34:17.289Z] 11729.75 IOPS, 45.82 MiB/s [2024-12-06T12:34:17.289Z] [2024-12-06 13:34:14.764687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:14.764721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.630 [2024-12-06 13:34:14.764967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.764994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.764999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.766530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.766546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.766559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.630 [2024-12-06 13:34:14.766564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:30.630 [2024-12-06 13:34:14.766575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.631 [2024-12-06 13:34:14.766942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.631 [2024-12-06 13:34:14.766957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.631 [2024-12-06 13:34:14.766968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.632 [2024-12-06 13:34:14.766973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.632 [2024-12-06 13:34:14.766983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.632 [2024-12-06 13:34:14.766988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:30.632 [2024-12-06 13:34:14.766999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.632 [2024-12-06 13:34:14.767004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:30.632 11867.36 IOPS, 46.36 MiB/s [2024-12-06T12:34:17.291Z] 11907.88 IOPS, 46.52 MiB/s [2024-12-06T12:34:17.291Z] Received shutdown signal, test time was about 26.839850 seconds 00:26:30.632 00:26:30.632 Latency(us) 00:26:30.632 [2024-12-06T12:34:17.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.632 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:30.632 Verification LBA range: start 0x0 length 0x4000 00:26:30.632 Nvme0n1 : 26.84 11938.56 46.64 0.00 0.00 10700.99 349.87 3019898.88 00:26:30.632 [2024-12-06T12:34:17.291Z] =================================================================================================================== 00:26:30.632 [2024-12-06T12:34:17.291Z] Total : 11938.56 46.64 0.00 0.00 10700.99 349.87 3019898.88 00:26:30.632 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.893 rmmod nvme_tcp 00:26:30.893 rmmod nvme_fabrics 00:26:30.893 rmmod nvme_keyring 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2279156 ']' 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2279156 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2279156 ']' 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2279156 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.893 
13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279156 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279156' 00:26:30.893 killing process with pid 2279156 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2279156 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2279156 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.893 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.153 
13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.153 13:34:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.063 00:26:33.063 real 0m41.548s 00:26:33.063 user 1m47.388s 00:26:33.063 sys 0m11.690s 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.063 ************************************ 00:26:33.063 END TEST nvmf_host_multipath_status 00:26:33.063 ************************************ 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.063 13:34:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.063 ************************************ 00:26:33.063 START TEST nvmf_discovery_remove_ifc 00:26:33.063 ************************************ 00:26:33.064 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:33.325 * Looking for test storage... 
00:26:33.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:26:33.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.325 --rc genhtml_branch_coverage=1 00:26:33.325 --rc genhtml_function_coverage=1 00:26:33.325 --rc genhtml_legend=1 00:26:33.325 --rc geninfo_all_blocks=1 00:26:33.325 --rc geninfo_unexecuted_blocks=1 00:26:33.325 00:26:33.325 ' 00:26:33.325 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:33.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.326 --rc genhtml_branch_coverage=1 00:26:33.326 --rc genhtml_function_coverage=1 00:26:33.326 --rc genhtml_legend=1 00:26:33.326 --rc geninfo_all_blocks=1 00:26:33.326 --rc geninfo_unexecuted_blocks=1 00:26:33.326 00:26:33.326 ' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:33.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.326 --rc genhtml_branch_coverage=1 00:26:33.326 --rc genhtml_function_coverage=1 00:26:33.326 --rc genhtml_legend=1 00:26:33.326 --rc geninfo_all_blocks=1 00:26:33.326 --rc geninfo_unexecuted_blocks=1 00:26:33.326 00:26:33.326 ' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:33.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.326 --rc genhtml_branch_coverage=1 00:26:33.326 --rc genhtml_function_coverage=1 00:26:33.326 --rc genhtml_legend=1 00:26:33.326 --rc geninfo_all_blocks=1 00:26:33.326 --rc geninfo_unexecuted_blocks=1 00:26:33.326 00:26:33.326 ' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.326 
13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.326 13:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.458 13:34:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.458 13:34:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:41.458 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.458 13:34:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:41.458 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:41.458 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:41.458 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.458 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:26:41.459 00:26:41.459 --- 10.0.0.2 ping statistics --- 00:26:41.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.459 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:26:41.459 00:26:41.459 --- 10.0.0.1 ping statistics --- 00:26:41.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.459 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2289459 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2289459 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2289459 ']' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.459 [2024-12-06 13:34:27.477423] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:26:41.459 [2024-12-06 13:34:27.477504] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.459 [2024-12-06 13:34:27.549531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.459 [2024-12-06 13:34:27.595934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.459 [2024-12-06 13:34:27.595977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:41.459 [2024-12-06 13:34:27.595983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.459 [2024-12-06 13:34:27.595989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.459 [2024-12-06 13:34:27.595993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.459 [2024-12-06 13:34:27.596535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.459 [2024-12-06 13:34:27.761068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.459 [2024-12-06 13:34:27.769335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:41.459 null0 00:26:41.459 [2024-12-06 13:34:27.801289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2289574 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2289574 /tmp/host.sock 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2289574 ']' 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:41.459 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.459 13:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.459 [2024-12-06 13:34:27.881937] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:26:41.459 [2024-12-06 13:34:27.881998] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289574 ] 00:26:41.459 [2024-12-06 13:34:27.972847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.459 [2024-12-06 13:34:28.025491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.401 13:34:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.401 13:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.342 [2024-12-06 13:34:29.849695] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:43.342 [2024-12-06 13:34:29.849727] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:43.342 [2024-12-06 13:34:29.849748] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.342 [2024-12-06 13:34:29.937019] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:43.604 [2024-12-06 13:34:30.163513] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:43.604 [2024-12-06 13:34:30.165019] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a4eed0:1 started. 
00:26:43.604 [2024-12-06 13:34:30.166925] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:43.604 [2024-12-06 13:34:30.166988] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:43.604 [2024-12-06 13:34:30.167017] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:43.604 [2024-12-06 13:34:30.167035] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.604 [2024-12-06 13:34:30.167067] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.604 [2024-12-06 13:34:30.169256] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a4eed0 was disconnected and freed. delete nvme_qpair. 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.604 13:34:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:43.604 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.882 13:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.954 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.954 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.954 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.954 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.954 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.954 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.955 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.955 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.955 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.955 13:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.928 13:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:47.307 13:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.245 13:34:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:48.245 13:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.187 [2024-12-06 13:34:35.616683] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:49.187 [2024-12-06 13:34:35.616719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.187 [2024-12-06 13:34:35.616728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.187 [2024-12-06 13:34:35.616735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.187 [2024-12-06 13:34:35.616741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.187 [2024-12-06 13:34:35.616746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.187 [2024-12-06 13:34:35.616755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.187 [2024-12-06 13:34:35.616761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.187 [2024-12-06 13:34:35.616766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.187 [2024-12-06 13:34:35.616772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:49.187 [2024-12-06 13:34:35.616777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:49.187 [2024-12-06 13:34:35.616782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b6d0 is same with the state(6) to be set 00:26:49.187 [2024-12-06 13:34:35.626705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b6d0 (9): Bad file descriptor 00:26:49.187 [2024-12-06 13:34:35.636738] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:49.187 [2024-12-06 13:34:35.636746] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:49.187 [2024-12-06 13:34:35.636751] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.187 [2024-12-06 13:34:35.636755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.187 [2024-12-06 13:34:35.636772] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.187 13:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.129 [2024-12-06 13:34:36.676624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:50.129 [2024-12-06 13:34:36.676718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2b6d0 with addr=10.0.0.2, port=4420 00:26:50.129 [2024-12-06 13:34:36.676750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2b6d0 is same with the state(6) to be set 00:26:50.129 [2024-12-06 13:34:36.676805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2b6d0 (9): Bad file descriptor 00:26:50.129 [2024-12-06 13:34:36.677928] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:50.129 [2024-12-06 13:34:36.677999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:50.129 [2024-12-06 13:34:36.678022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:50.129 [2024-12-06 13:34:36.678046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:50.129 [2024-12-06 13:34:36.678066] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:50.129 [2024-12-06 13:34:36.678082] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:50.129 [2024-12-06 13:34:36.678096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:50.129 [2024-12-06 13:34:36.678130] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:50.129 [2024-12-06 13:34:36.678145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:50.129 13:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.129 13:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:50.129 13:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.071 [2024-12-06 13:34:37.680567] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:51.071 [2024-12-06 13:34:37.680581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:51.071 [2024-12-06 13:34:37.680590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:51.071 [2024-12-06 13:34:37.680596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:51.071 [2024-12-06 13:34:37.680602] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:51.071 [2024-12-06 13:34:37.680607] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:51.071 [2024-12-06 13:34:37.680611] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:51.071 [2024-12-06 13:34:37.680614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:51.071 [2024-12-06 13:34:37.680633] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:51.071 [2024-12-06 13:34:37.680652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.071 [2024-12-06 13:34:37.680660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.071 [2024-12-06 13:34:37.680668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.071 [2024-12-06 13:34:37.680673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.072 [2024-12-06 13:34:37.680679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:51.072 [2024-12-06 13:34:37.680684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.072 [2024-12-06 13:34:37.680690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.072 [2024-12-06 13:34:37.680695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.072 [2024-12-06 13:34:37.680701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:51.072 [2024-12-06 13:34:37.680706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.072 [2024-12-06 13:34:37.680712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:51.072 [2024-12-06 13:34:37.681009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1adf0 (9): Bad file descriptor 00:26:51.072 [2024-12-06 13:34:37.682021] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:51.072 [2024-12-06 13:34:37.682030] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.072 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:51.333 13:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.276 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.537 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:52.537 13:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.108 [2024-12-06 13:34:39.735394] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:53.108 [2024-12-06 13:34:39.735408] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:53.108 [2024-12-06 13:34:39.735418] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:53.368 [2024-12-06 13:34:39.864789] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.368 13:34:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.368 13:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:53.368 13:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.629 [2024-12-06 13:34:40.047685] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:53.629 [2024-12-06 13:34:40.048475] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1a58640:1 started. 
00:26:53.629 [2024-12-06 13:34:40.049383] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:53.629 [2024-12-06 13:34:40.049410] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:53.629 [2024-12-06 13:34:40.049424] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:53.629 [2024-12-06 13:34:40.049435] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:53.629 [2024-12-06 13:34:40.049442] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:53.629 [2024-12-06 13:34:40.053504] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1a58640 was disconnected and freed. delete nvme_qpair. 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:54.572 13:34:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2289574 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2289574 ']' 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2289574 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289574 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289574' 00:26:54.572 killing process with pid 2289574 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2289574 00:26:54.572 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2289574 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.833 
13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.833 rmmod nvme_tcp 00:26:54.833 rmmod nvme_fabrics 00:26:54.833 rmmod nvme_keyring 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2289459 ']' 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2289459 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2289459 ']' 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2289459 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289459 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289459' 00:26:54.833 
killing process with pid 2289459 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2289459 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2289459 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.833 13:34:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.382 00:26:57.382 real 0m23.832s 00:26:57.382 user 0m28.813s 00:26:57.382 sys 0m7.175s 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.382 ************************************ 00:26:57.382 END TEST nvmf_discovery_remove_ifc 00:26:57.382 ************************************ 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.382 ************************************ 00:26:57.382 START TEST nvmf_identify_kernel_target 00:26:57.382 ************************************ 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:57.382 * Looking for test storage... 
00:26:57.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:57.382 13:34:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:57.382 13:34:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:57.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.382 --rc genhtml_branch_coverage=1 00:26:57.382 --rc genhtml_function_coverage=1 00:26:57.382 --rc genhtml_legend=1 00:26:57.382 --rc geninfo_all_blocks=1 00:26:57.382 --rc geninfo_unexecuted_blocks=1 00:26:57.382 00:26:57.382 ' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:57.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.382 --rc genhtml_branch_coverage=1 00:26:57.382 --rc genhtml_function_coverage=1 00:26:57.382 --rc genhtml_legend=1 00:26:57.382 --rc geninfo_all_blocks=1 00:26:57.382 --rc geninfo_unexecuted_blocks=1 00:26:57.382 00:26:57.382 ' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:57.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.382 --rc genhtml_branch_coverage=1 00:26:57.382 --rc genhtml_function_coverage=1 00:26:57.382 --rc genhtml_legend=1 00:26:57.382 --rc geninfo_all_blocks=1 00:26:57.382 --rc geninfo_unexecuted_blocks=1 00:26:57.382 00:26:57.382 ' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:57.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:57.382 --rc genhtml_branch_coverage=1 00:26:57.382 --rc genhtml_function_coverage=1 00:26:57.382 --rc genhtml_legend=1 00:26:57.382 --rc geninfo_all_blocks=1 00:26:57.382 --rc geninfo_unexecuted_blocks=1 00:26:57.382 00:26:57.382 ' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.382 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:57.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.383 13:34:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.520 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.521 13:34:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:05.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.521 13:34:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:05.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.521 13:34:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:05.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:05.521 Found net devices under 0000:4b:00.1: cvl_0_1 
00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.521 13:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:05.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:27:05.521 00:27:05.521 --- 10.0.0.2 ping statistics --- 00:27:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.521 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:27:05.521 00:27:05.521 --- 10.0.0.1 ping statistics --- 00:27:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.521 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:05.521 
13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:05.521 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:05.522 13:34:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:08.819 Waiting for block devices as requested 00:27:08.819 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.819 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:09.078 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:09.078 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:09.078 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:09.337 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:09.337 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:09.337 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:09.597 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:27:09.597 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:09.597 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:09.858 No valid GPT data, bailing 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:09.858 00:27:09.858 Discovery Log Number of Records 2, Generation counter 2 00:27:09.858 =====Discovery Log Entry 0====== 00:27:09.858 trtype: tcp 00:27:09.858 adrfam: ipv4 00:27:09.858 subtype: current discovery subsystem 
00:27:09.858 treq: not specified, sq flow control disable supported 00:27:09.858 portid: 1 00:27:09.858 trsvcid: 4420 00:27:09.858 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:09.858 traddr: 10.0.0.1 00:27:09.858 eflags: none 00:27:09.858 sectype: none 00:27:09.858 =====Discovery Log Entry 1====== 00:27:09.858 trtype: tcp 00:27:09.858 adrfam: ipv4 00:27:09.858 subtype: nvme subsystem 00:27:09.858 treq: not specified, sq flow control disable supported 00:27:09.858 portid: 1 00:27:09.858 trsvcid: 4420 00:27:09.858 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:09.858 traddr: 10.0.0.1 00:27:09.858 eflags: none 00:27:09.858 sectype: none 00:27:09.858 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:09.858 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:09.858 ===================================================== 00:27:09.859 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:09.859 ===================================================== 00:27:09.859 Controller Capabilities/Features 00:27:09.859 ================================ 00:27:09.859 Vendor ID: 0000 00:27:09.859 Subsystem Vendor ID: 0000 00:27:09.859 Serial Number: 80e079ccb22ce6966427 00:27:09.859 Model Number: Linux 00:27:09.859 Firmware Version: 6.8.9-20 00:27:09.859 Recommended Arb Burst: 0 00:27:09.859 IEEE OUI Identifier: 00 00 00 00:27:09.859 Multi-path I/O 00:27:09.859 May have multiple subsystem ports: No 00:27:09.859 May have multiple controllers: No 00:27:09.859 Associated with SR-IOV VF: No 00:27:09.859 Max Data Transfer Size: Unlimited 00:27:09.859 Max Number of Namespaces: 0 00:27:09.859 Max Number of I/O Queues: 1024 00:27:09.859 NVMe Specification Version (VS): 1.3 00:27:09.859 NVMe Specification Version (Identify): 1.3 00:27:09.859 Maximum Queue Entries: 1024 
00:27:09.859 Contiguous Queues Required: No 00:27:09.859 Arbitration Mechanisms Supported 00:27:09.859 Weighted Round Robin: Not Supported 00:27:09.859 Vendor Specific: Not Supported 00:27:09.859 Reset Timeout: 7500 ms 00:27:09.859 Doorbell Stride: 4 bytes 00:27:09.859 NVM Subsystem Reset: Not Supported 00:27:09.859 Command Sets Supported 00:27:09.859 NVM Command Set: Supported 00:27:09.859 Boot Partition: Not Supported 00:27:09.859 Memory Page Size Minimum: 4096 bytes 00:27:09.859 Memory Page Size Maximum: 4096 bytes 00:27:09.859 Persistent Memory Region: Not Supported 00:27:09.859 Optional Asynchronous Events Supported 00:27:09.859 Namespace Attribute Notices: Not Supported 00:27:09.859 Firmware Activation Notices: Not Supported 00:27:09.859 ANA Change Notices: Not Supported 00:27:09.859 PLE Aggregate Log Change Notices: Not Supported 00:27:09.859 LBA Status Info Alert Notices: Not Supported 00:27:09.859 EGE Aggregate Log Change Notices: Not Supported 00:27:09.859 Normal NVM Subsystem Shutdown event: Not Supported 00:27:09.859 Zone Descriptor Change Notices: Not Supported 00:27:09.859 Discovery Log Change Notices: Supported 00:27:09.859 Controller Attributes 00:27:09.859 128-bit Host Identifier: Not Supported 00:27:09.859 Non-Operational Permissive Mode: Not Supported 00:27:09.859 NVM Sets: Not Supported 00:27:09.859 Read Recovery Levels: Not Supported 00:27:09.859 Endurance Groups: Not Supported 00:27:09.859 Predictable Latency Mode: Not Supported 00:27:09.859 Traffic Based Keep ALive: Not Supported 00:27:09.859 Namespace Granularity: Not Supported 00:27:09.859 SQ Associations: Not Supported 00:27:09.859 UUID List: Not Supported 00:27:09.859 Multi-Domain Subsystem: Not Supported 00:27:09.859 Fixed Capacity Management: Not Supported 00:27:09.859 Variable Capacity Management: Not Supported 00:27:09.859 Delete Endurance Group: Not Supported 00:27:09.859 Delete NVM Set: Not Supported 00:27:09.859 Extended LBA Formats Supported: Not Supported 00:27:09.859 Flexible 
Data Placement Supported: Not Supported 00:27:09.859 00:27:09.859 Controller Memory Buffer Support 00:27:09.859 ================================ 00:27:09.859 Supported: No 00:27:09.859 00:27:09.859 Persistent Memory Region Support 00:27:09.859 ================================ 00:27:09.859 Supported: No 00:27:09.859 00:27:09.859 Admin Command Set Attributes 00:27:09.859 ============================ 00:27:09.859 Security Send/Receive: Not Supported 00:27:09.859 Format NVM: Not Supported 00:27:09.859 Firmware Activate/Download: Not Supported 00:27:09.859 Namespace Management: Not Supported 00:27:09.859 Device Self-Test: Not Supported 00:27:09.859 Directives: Not Supported 00:27:09.859 NVMe-MI: Not Supported 00:27:09.859 Virtualization Management: Not Supported 00:27:09.859 Doorbell Buffer Config: Not Supported 00:27:09.859 Get LBA Status Capability: Not Supported 00:27:09.859 Command & Feature Lockdown Capability: Not Supported 00:27:09.859 Abort Command Limit: 1 00:27:09.859 Async Event Request Limit: 1 00:27:09.859 Number of Firmware Slots: N/A 00:27:09.859 Firmware Slot 1 Read-Only: N/A 00:27:09.859 Firmware Activation Without Reset: N/A 00:27:09.859 Multiple Update Detection Support: N/A 00:27:09.859 Firmware Update Granularity: No Information Provided 00:27:09.859 Per-Namespace SMART Log: No 00:27:09.859 Asymmetric Namespace Access Log Page: Not Supported 00:27:09.859 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:09.859 Command Effects Log Page: Not Supported 00:27:09.859 Get Log Page Extended Data: Supported 00:27:09.859 Telemetry Log Pages: Not Supported 00:27:09.859 Persistent Event Log Pages: Not Supported 00:27:09.859 Supported Log Pages Log Page: May Support 00:27:09.859 Commands Supported & Effects Log Page: Not Supported 00:27:09.859 Feature Identifiers & Effects Log Page:May Support 00:27:09.859 NVMe-MI Commands & Effects Log Page: May Support 00:27:09.859 Data Area 4 for Telemetry Log: Not Supported 00:27:09.859 Error Log Page Entries 
Supported: 1 00:27:09.859 Keep Alive: Not Supported 00:27:09.859 00:27:09.859 NVM Command Set Attributes 00:27:09.859 ========================== 00:27:09.859 Submission Queue Entry Size 00:27:09.859 Max: 1 00:27:09.859 Min: 1 00:27:09.859 Completion Queue Entry Size 00:27:09.859 Max: 1 00:27:09.859 Min: 1 00:27:09.859 Number of Namespaces: 0 00:27:09.859 Compare Command: Not Supported 00:27:09.859 Write Uncorrectable Command: Not Supported 00:27:09.859 Dataset Management Command: Not Supported 00:27:09.859 Write Zeroes Command: Not Supported 00:27:09.859 Set Features Save Field: Not Supported 00:27:09.859 Reservations: Not Supported 00:27:09.859 Timestamp: Not Supported 00:27:09.859 Copy: Not Supported 00:27:09.859 Volatile Write Cache: Not Present 00:27:09.859 Atomic Write Unit (Normal): 1 00:27:09.859 Atomic Write Unit (PFail): 1 00:27:09.859 Atomic Compare & Write Unit: 1 00:27:09.859 Fused Compare & Write: Not Supported 00:27:09.859 Scatter-Gather List 00:27:09.859 SGL Command Set: Supported 00:27:09.859 SGL Keyed: Not Supported 00:27:09.859 SGL Bit Bucket Descriptor: Not Supported 00:27:09.859 SGL Metadata Pointer: Not Supported 00:27:09.859 Oversized SGL: Not Supported 00:27:09.859 SGL Metadata Address: Not Supported 00:27:09.859 SGL Offset: Supported 00:27:09.859 Transport SGL Data Block: Not Supported 00:27:09.859 Replay Protected Memory Block: Not Supported 00:27:09.859 00:27:09.859 Firmware Slot Information 00:27:09.859 ========================= 00:27:09.859 Active slot: 0 00:27:09.859 00:27:09.859 00:27:09.859 Error Log 00:27:09.859 ========= 00:27:09.859 00:27:09.859 Active Namespaces 00:27:09.859 ================= 00:27:09.859 Discovery Log Page 00:27:09.859 ================== 00:27:09.859 Generation Counter: 2 00:27:09.859 Number of Records: 2 00:27:09.859 Record Format: 0 00:27:09.859 00:27:09.859 Discovery Log Entry 0 00:27:09.859 ---------------------- 00:27:09.859 Transport Type: 3 (TCP) 00:27:09.859 Address Family: 1 (IPv4) 00:27:09.859 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:09.859 Entry Flags: 00:27:09.859 Duplicate Returned Information: 0 00:27:09.859 Explicit Persistent Connection Support for Discovery: 0 00:27:09.859 Transport Requirements: 00:27:09.859 Secure Channel: Not Specified 00:27:09.859 Port ID: 1 (0x0001) 00:27:09.859 Controller ID: 65535 (0xffff) 00:27:09.859 Admin Max SQ Size: 32 00:27:09.859 Transport Service Identifier: 4420 00:27:09.859 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:09.859 Transport Address: 10.0.0.1 00:27:09.859 Discovery Log Entry 1 00:27:09.859 ---------------------- 00:27:09.859 Transport Type: 3 (TCP) 00:27:09.859 Address Family: 1 (IPv4) 00:27:09.859 Subsystem Type: 2 (NVM Subsystem) 00:27:09.859 Entry Flags: 00:27:09.859 Duplicate Returned Information: 0 00:27:09.859 Explicit Persistent Connection Support for Discovery: 0 00:27:09.859 Transport Requirements: 00:27:09.859 Secure Channel: Not Specified 00:27:09.859 Port ID: 1 (0x0001) 00:27:09.859 Controller ID: 65535 (0xffff) 00:27:09.859 Admin Max SQ Size: 32 00:27:09.859 Transport Service Identifier: 4420 00:27:09.859 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:09.859 Transport Address: 10.0.0.1 00:27:09.859 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:10.120 get_feature(0x01) failed 00:27:10.120 get_feature(0x02) failed 00:27:10.120 get_feature(0x04) failed 00:27:10.120 ===================================================== 00:27:10.120 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:10.120 ===================================================== 00:27:10.120 Controller Capabilities/Features 00:27:10.120 ================================ 00:27:10.120 Vendor ID: 0000 00:27:10.120 Subsystem Vendor ID: 
0000 00:27:10.120 Serial Number: aa1a3a59b20df62a92f7 00:27:10.120 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:10.120 Firmware Version: 6.8.9-20 00:27:10.120 Recommended Arb Burst: 6 00:27:10.120 IEEE OUI Identifier: 00 00 00 00:27:10.120 Multi-path I/O 00:27:10.120 May have multiple subsystem ports: Yes 00:27:10.120 May have multiple controllers: Yes 00:27:10.120 Associated with SR-IOV VF: No 00:27:10.120 Max Data Transfer Size: Unlimited 00:27:10.120 Max Number of Namespaces: 1024 00:27:10.120 Max Number of I/O Queues: 128 00:27:10.120 NVMe Specification Version (VS): 1.3 00:27:10.120 NVMe Specification Version (Identify): 1.3 00:27:10.120 Maximum Queue Entries: 1024 00:27:10.120 Contiguous Queues Required: No 00:27:10.120 Arbitration Mechanisms Supported 00:27:10.120 Weighted Round Robin: Not Supported 00:27:10.120 Vendor Specific: Not Supported 00:27:10.120 Reset Timeout: 7500 ms 00:27:10.120 Doorbell Stride: 4 bytes 00:27:10.120 NVM Subsystem Reset: Not Supported 00:27:10.120 Command Sets Supported 00:27:10.120 NVM Command Set: Supported 00:27:10.120 Boot Partition: Not Supported 00:27:10.120 Memory Page Size Minimum: 4096 bytes 00:27:10.120 Memory Page Size Maximum: 4096 bytes 00:27:10.120 Persistent Memory Region: Not Supported 00:27:10.120 Optional Asynchronous Events Supported 00:27:10.120 Namespace Attribute Notices: Supported 00:27:10.120 Firmware Activation Notices: Not Supported 00:27:10.120 ANA Change Notices: Supported 00:27:10.120 PLE Aggregate Log Change Notices: Not Supported 00:27:10.120 LBA Status Info Alert Notices: Not Supported 00:27:10.120 EGE Aggregate Log Change Notices: Not Supported 00:27:10.120 Normal NVM Subsystem Shutdown event: Not Supported 00:27:10.120 Zone Descriptor Change Notices: Not Supported 00:27:10.120 Discovery Log Change Notices: Not Supported 00:27:10.120 Controller Attributes 00:27:10.120 128-bit Host Identifier: Supported 00:27:10.120 Non-Operational Permissive Mode: Not Supported 00:27:10.120 NVM Sets: Not 
Supported 00:27:10.120 Read Recovery Levels: Not Supported 00:27:10.120 Endurance Groups: Not Supported 00:27:10.120 Predictable Latency Mode: Not Supported 00:27:10.120 Traffic Based Keep ALive: Supported 00:27:10.120 Namespace Granularity: Not Supported 00:27:10.120 SQ Associations: Not Supported 00:27:10.120 UUID List: Not Supported 00:27:10.120 Multi-Domain Subsystem: Not Supported 00:27:10.120 Fixed Capacity Management: Not Supported 00:27:10.120 Variable Capacity Management: Not Supported 00:27:10.120 Delete Endurance Group: Not Supported 00:27:10.120 Delete NVM Set: Not Supported 00:27:10.120 Extended LBA Formats Supported: Not Supported 00:27:10.120 Flexible Data Placement Supported: Not Supported 00:27:10.120 00:27:10.120 Controller Memory Buffer Support 00:27:10.120 ================================ 00:27:10.120 Supported: No 00:27:10.120 00:27:10.120 Persistent Memory Region Support 00:27:10.120 ================================ 00:27:10.120 Supported: No 00:27:10.120 00:27:10.120 Admin Command Set Attributes 00:27:10.120 ============================ 00:27:10.120 Security Send/Receive: Not Supported 00:27:10.120 Format NVM: Not Supported 00:27:10.120 Firmware Activate/Download: Not Supported 00:27:10.120 Namespace Management: Not Supported 00:27:10.120 Device Self-Test: Not Supported 00:27:10.120 Directives: Not Supported 00:27:10.120 NVMe-MI: Not Supported 00:27:10.120 Virtualization Management: Not Supported 00:27:10.120 Doorbell Buffer Config: Not Supported 00:27:10.120 Get LBA Status Capability: Not Supported 00:27:10.120 Command & Feature Lockdown Capability: Not Supported 00:27:10.120 Abort Command Limit: 4 00:27:10.120 Async Event Request Limit: 4 00:27:10.120 Number of Firmware Slots: N/A 00:27:10.120 Firmware Slot 1 Read-Only: N/A 00:27:10.120 Firmware Activation Without Reset: N/A 00:27:10.120 Multiple Update Detection Support: N/A 00:27:10.120 Firmware Update Granularity: No Information Provided 00:27:10.120 Per-Namespace SMART Log: Yes 
00:27:10.120 Asymmetric Namespace Access Log Page: Supported 00:27:10.120 ANA Transition Time : 10 sec 00:27:10.120 00:27:10.120 Asymmetric Namespace Access Capabilities 00:27:10.120 ANA Optimized State : Supported 00:27:10.120 ANA Non-Optimized State : Supported 00:27:10.120 ANA Inaccessible State : Supported 00:27:10.120 ANA Persistent Loss State : Supported 00:27:10.120 ANA Change State : Supported 00:27:10.120 ANAGRPID is not changed : No 00:27:10.120 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:10.120 00:27:10.120 ANA Group Identifier Maximum : 128 00:27:10.120 Number of ANA Group Identifiers : 128 00:27:10.120 Max Number of Allowed Namespaces : 1024 00:27:10.120 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:10.120 Command Effects Log Page: Supported 00:27:10.120 Get Log Page Extended Data: Supported 00:27:10.120 Telemetry Log Pages: Not Supported 00:27:10.120 Persistent Event Log Pages: Not Supported 00:27:10.120 Supported Log Pages Log Page: May Support 00:27:10.120 Commands Supported & Effects Log Page: Not Supported 00:27:10.120 Feature Identifiers & Effects Log Page:May Support 00:27:10.120 NVMe-MI Commands & Effects Log Page: May Support 00:27:10.120 Data Area 4 for Telemetry Log: Not Supported 00:27:10.120 Error Log Page Entries Supported: 128 00:27:10.120 Keep Alive: Supported 00:27:10.120 Keep Alive Granularity: 1000 ms 00:27:10.120 00:27:10.120 NVM Command Set Attributes 00:27:10.120 ========================== 00:27:10.120 Submission Queue Entry Size 00:27:10.120 Max: 64 00:27:10.120 Min: 64 00:27:10.120 Completion Queue Entry Size 00:27:10.120 Max: 16 00:27:10.120 Min: 16 00:27:10.120 Number of Namespaces: 1024 00:27:10.120 Compare Command: Not Supported 00:27:10.120 Write Uncorrectable Command: Not Supported 00:27:10.120 Dataset Management Command: Supported 00:27:10.120 Write Zeroes Command: Supported 00:27:10.120 Set Features Save Field: Not Supported 00:27:10.120 Reservations: Not Supported 00:27:10.121 Timestamp: Not Supported 
00:27:10.121 Copy: Not Supported 00:27:10.121 Volatile Write Cache: Present 00:27:10.121 Atomic Write Unit (Normal): 1 00:27:10.121 Atomic Write Unit (PFail): 1 00:27:10.121 Atomic Compare & Write Unit: 1 00:27:10.121 Fused Compare & Write: Not Supported 00:27:10.121 Scatter-Gather List 00:27:10.121 SGL Command Set: Supported 00:27:10.121 SGL Keyed: Not Supported 00:27:10.121 SGL Bit Bucket Descriptor: Not Supported 00:27:10.121 SGL Metadata Pointer: Not Supported 00:27:10.121 Oversized SGL: Not Supported 00:27:10.121 SGL Metadata Address: Not Supported 00:27:10.121 SGL Offset: Supported 00:27:10.121 Transport SGL Data Block: Not Supported 00:27:10.121 Replay Protected Memory Block: Not Supported 00:27:10.121 00:27:10.121 Firmware Slot Information 00:27:10.121 ========================= 00:27:10.121 Active slot: 0 00:27:10.121 00:27:10.121 Asymmetric Namespace Access 00:27:10.121 =========================== 00:27:10.121 Change Count : 0 00:27:10.121 Number of ANA Group Descriptors : 1 00:27:10.121 ANA Group Descriptor : 0 00:27:10.121 ANA Group ID : 1 00:27:10.121 Number of NSID Values : 1 00:27:10.121 Change Count : 0 00:27:10.121 ANA State : 1 00:27:10.121 Namespace Identifier : 1 00:27:10.121 00:27:10.121 Commands Supported and Effects 00:27:10.121 ============================== 00:27:10.121 Admin Commands 00:27:10.121 -------------- 00:27:10.121 Get Log Page (02h): Supported 00:27:10.121 Identify (06h): Supported 00:27:10.121 Abort (08h): Supported 00:27:10.121 Set Features (09h): Supported 00:27:10.121 Get Features (0Ah): Supported 00:27:10.121 Asynchronous Event Request (0Ch): Supported 00:27:10.121 Keep Alive (18h): Supported 00:27:10.121 I/O Commands 00:27:10.121 ------------ 00:27:10.121 Flush (00h): Supported 00:27:10.121 Write (01h): Supported LBA-Change 00:27:10.121 Read (02h): Supported 00:27:10.121 Write Zeroes (08h): Supported LBA-Change 00:27:10.121 Dataset Management (09h): Supported 00:27:10.121 00:27:10.121 Error Log 00:27:10.121 ========= 
00:27:10.121 Entry: 0 00:27:10.121 Error Count: 0x3 00:27:10.121 Submission Queue Id: 0x0 00:27:10.121 Command Id: 0x5 00:27:10.121 Phase Bit: 0 00:27:10.121 Status Code: 0x2 00:27:10.121 Status Code Type: 0x0 00:27:10.121 Do Not Retry: 1 00:27:10.121 Error Location: 0x28 00:27:10.121 LBA: 0x0 00:27:10.121 Namespace: 0x0 00:27:10.121 Vendor Log Page: 0x0 00:27:10.121 ----------- 00:27:10.121 Entry: 1 00:27:10.121 Error Count: 0x2 00:27:10.121 Submission Queue Id: 0x0 00:27:10.121 Command Id: 0x5 00:27:10.121 Phase Bit: 0 00:27:10.121 Status Code: 0x2 00:27:10.121 Status Code Type: 0x0 00:27:10.121 Do Not Retry: 1 00:27:10.121 Error Location: 0x28 00:27:10.121 LBA: 0x0 00:27:10.121 Namespace: 0x0 00:27:10.121 Vendor Log Page: 0x0 00:27:10.121 ----------- 00:27:10.121 Entry: 2 00:27:10.121 Error Count: 0x1 00:27:10.121 Submission Queue Id: 0x0 00:27:10.121 Command Id: 0x4 00:27:10.121 Phase Bit: 0 00:27:10.121 Status Code: 0x2 00:27:10.121 Status Code Type: 0x0 00:27:10.121 Do Not Retry: 1 00:27:10.121 Error Location: 0x28 00:27:10.121 LBA: 0x0 00:27:10.121 Namespace: 0x0 00:27:10.121 Vendor Log Page: 0x0 00:27:10.121 00:27:10.121 Number of Queues 00:27:10.121 ================ 00:27:10.121 Number of I/O Submission Queues: 128 00:27:10.121 Number of I/O Completion Queues: 128 00:27:10.121 00:27:10.121 ZNS Specific Controller Data 00:27:10.121 ============================ 00:27:10.121 Zone Append Size Limit: 0 00:27:10.121 00:27:10.121 00:27:10.121 Active Namespaces 00:27:10.121 ================= 00:27:10.121 get_feature(0x05) failed 00:27:10.121 Namespace ID:1 00:27:10.121 Command Set Identifier: NVM (00h) 00:27:10.121 Deallocate: Supported 00:27:10.121 Deallocated/Unwritten Error: Not Supported 00:27:10.121 Deallocated Read Value: Unknown 00:27:10.121 Deallocate in Write Zeroes: Not Supported 00:27:10.121 Deallocated Guard Field: 0xFFFF 00:27:10.121 Flush: Supported 00:27:10.121 Reservation: Not Supported 00:27:10.121 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:10.121 Size (in LBAs): 3750748848 (1788GiB) 00:27:10.121 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:10.121 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:10.121 UUID: 208ee6f8-de8d-4dee-82e5-8441f3ffc934 00:27:10.121 Thin Provisioning: Not Supported 00:27:10.121 Per-NS Atomic Units: Yes 00:27:10.121 Atomic Write Unit (Normal): 8 00:27:10.121 Atomic Write Unit (PFail): 8 00:27:10.121 Preferred Write Granularity: 8 00:27:10.121 Atomic Compare & Write Unit: 8 00:27:10.121 Atomic Boundary Size (Normal): 0 00:27:10.121 Atomic Boundary Size (PFail): 0 00:27:10.121 Atomic Boundary Offset: 0 00:27:10.121 NGUID/EUI64 Never Reused: No 00:27:10.121 ANA group ID: 1 00:27:10.121 Namespace Write Protected: No 00:27:10.121 Number of LBA Formats: 1 00:27:10.121 Current LBA Format: LBA Format #00 00:27:10.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:10.121 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.121 rmmod nvme_tcp 00:27:10.121 rmmod nvme_fabrics 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:10.121 13:34:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.121 13:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.033 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.033 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:12.033 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:12.033 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:12.294 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.295 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.295 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:12.295 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.295 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:12.295 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:12.295 13:34:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.616 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.616 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:27:15.876 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.876 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.876 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.876 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:15.876 00:27:15.876 real 0m18.856s 00:27:15.876 user 0m5.164s 00:27:15.876 sys 0m10.777s 00:27:15.876 13:35:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.876 13:35:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:15.876 ************************************ 00:27:15.876 END TEST nvmf_identify_kernel_target 00:27:15.876 ************************************ 00:27:15.876 13:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:15.876 13:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:15.876 13:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.876 13:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.137 ************************************ 00:27:16.137 START TEST nvmf_auth_host 00:27:16.137 ************************************ 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:16.137 * Looking for test storage... 
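The kernel-target teardown traced just before this point (unlink the subsystem from the port, remove the namespace, port, and subsystem configfs directories, then unload the nvmet modules) follows a strict ordering imposed by configfs. A standalone sketch of that order, shown as a dry run that prints each step rather than executing it (the NQN and the port/namespace numbers mirror the trace; running the real commands requires root and a configured target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmet teardown order seen in the trace.
nqn="nqn.2016-06.io.spdk:testnqn"
cfg="/sys/kernel/config/nvmet"

teardown_cmds() {
    # Order matters: the port->subsystem symlink must be removed first,
    # and configfs only allows rmdir on empty directories, so namespaces
    # go before their subsystem and modules are unloaded last.
    echo "rm -f $cfg/ports/1/subsystems/$nqn"
    echo "rmdir $cfg/subsystems/$nqn/namespaces/1"
    echo "rmdir $cfg/ports/1"
    echo "rmdir $cfg/subsystems/$nqn"
    echo "modprobe -r nvmet_tcp nvmet"
}

teardown_cmds
```

Piping each printed line through `sudo sh` would perform the actual teardown on a host with a live kernel target.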
00:27:16.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:16.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.137 --rc genhtml_branch_coverage=1 00:27:16.137 --rc genhtml_function_coverage=1 00:27:16.137 --rc genhtml_legend=1 00:27:16.137 --rc geninfo_all_blocks=1 00:27:16.137 --rc geninfo_unexecuted_blocks=1 00:27:16.137 00:27:16.137 ' 00:27:16.137 13:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:16.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.137 --rc genhtml_branch_coverage=1 00:27:16.137 --rc genhtml_function_coverage=1 00:27:16.137 --rc genhtml_legend=1 00:27:16.137 --rc geninfo_all_blocks=1 00:27:16.137 --rc geninfo_unexecuted_blocks=1 00:27:16.137 00:27:16.137 ' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:16.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.137 --rc genhtml_branch_coverage=1 00:27:16.137 --rc genhtml_function_coverage=1 00:27:16.137 --rc genhtml_legend=1 00:27:16.137 --rc geninfo_all_blocks=1 00:27:16.137 --rc geninfo_unexecuted_blocks=1 00:27:16.137 00:27:16.137 ' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:16.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.137 --rc genhtml_branch_coverage=1 00:27:16.137 --rc genhtml_function_coverage=1 00:27:16.137 --rc genhtml_legend=1 00:27:16.137 --rc geninfo_all_blocks=1 00:27:16.137 --rc geninfo_unexecuted_blocks=1 00:27:16.137 00:27:16.137 ' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
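The lcov gate traced above (`lt 1.15 2` via `cmp_versions`) splits both version strings on `.`, `-`, and `:` and compares the fields numerically, padding the shorter version with zeros. A minimal standalone rendering of that idea, in the spirit of the `scripts/common.sh` helper rather than a verbatim copy (numeric fields only):

```shell
#!/usr/bin/env bash
# ver_lt A B: succeed (exit 0) iff version A is strictly less than B.
ver_lt() {
    local IFS=.-:            # split fields on '.', '-' and ':'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad missing fields with 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                 # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```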
00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.137 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.398 13:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.398 13:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.398 13:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.530 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:24.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:24.531 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
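The enumeration above buckets NICs by PCI device ID before checking which kernel driver and net devices sit behind them; here `0x159b` matches the `e810` array, so the two ports are treated as Intel E810/ice. The ID-to-family mapping can be sketched as follows (the IDs are the subset visible in the `e810`/`x722`/`mlx` array setup earlier in the trace; the function name is illustrative, not part of `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# Illustrative mapping of PCI device IDs to NIC families, mirroring the
# e810/x722/mlx arrays built in nvmf/common.sh (subset of IDs only).
nic_family() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;     # Intel E810 (ice driver)
        0x37d2)        echo x722 ;;     # Intel X722 (i40e driver)
        0x1017|0x1019|0x1015|0x1013|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                       echo mlx ;;      # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

nic_family 0x159b   # the ID found at 0000:4b:00.0/1 above; prints: e810
```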
00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:24.531 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:24.531 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:24.531 13:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.531 13:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.531 13:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:27:24.531 00:27:24.531 --- 10.0.0.2 ping statistics --- 00:27:24.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.531 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:27:24.531 00:27:24.531 --- 10.0.0.1 ping statistics --- 00:27:24.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.531 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2304005 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2304005 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2304005 ']' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.531 13:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.792 13:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cac698353c4bcadb8cbde29dbf4b967c 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QwY 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cac698353c4bcadb8cbde29dbf4b967c 0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cac698353c4bcadb8cbde29dbf4b967c 0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cac698353c4bcadb8cbde29dbf4b967c 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QwY 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QwY 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QwY 
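`gen_dhchap_key null 32` above draws `len/2` random bytes with `xxd` and hands the resulting hex string to a small Python helper (`format_dhchap_key`) that wraps it in the `DHHC-1:` key format. Just the random-hex step can be reproduced standalone; the `DHHC-1` wrapping itself is left to the harness and is not re-implemented here:

```shell
#!/usr/bin/env bash
# Reproduce the raw-key step of gen_dhchap_key: len/2 random bytes as a
# len-character lowercase hex string (here len=32, matching the
# "gen_dhchap_key null 32" call in the trace).
len=32
key=$(xxd -p -c0 -l $(( len / 2 )) /dev/urandom)
echo "$key"
```

The output is random, e.g. a 32-hex-digit string comparable to the `cac698...` value captured in the trace.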
00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=24ce39f17bbea35943e802dff2b0d1d7c5e44a848df60a7b0be3d2b97a2bdef7 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3mQ 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 24ce39f17bbea35943e802dff2b0d1d7c5e44a848df60a7b0be3d2b97a2bdef7 3 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 24ce39f17bbea35943e802dff2b0d1d7c5e44a848df60a7b0be3d2b97a2bdef7 3 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=24ce39f17bbea35943e802dff2b0d1d7c5e44a848df60a7b0be3d2b97a2bdef7 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3mQ 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3mQ 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3mQ 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f52cdad6c0d48acf26406fb5023b70186cf61887f843cef0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.T2P 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f52cdad6c0d48acf26406fb5023b70186cf61887f843cef0 0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f52cdad6c0d48acf26406fb5023b70186cf61887f843cef0 0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f52cdad6c0d48acf26406fb5023b70186cf61887f843cef0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.T2P 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.T2P 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.T2P 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:24.792 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cdec29e8e5f895897e263fc810e70251c317e8079002bc88 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bdh 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cdec29e8e5f895897e263fc810e70251c317e8079002bc88 2 00:27:25.052 13:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cdec29e8e5f895897e263fc810e70251c317e8079002bc88 2 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cdec29e8e5f895897e263fc810e70251c317e8079002bc88 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bdh 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bdh 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Bdh 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d2b4be9aa1693069a12da03e02071b36 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5AI 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d2b4be9aa1693069a12da03e02071b36 1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d2b4be9aa1693069a12da03e02071b36 1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d2b4be9aa1693069a12da03e02071b36 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5AI 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5AI 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5AI 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b2781ebad9c5ab9f368074b49dd9dd6d 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.LvK 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b2781ebad9c5ab9f368074b49dd9dd6d 1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b2781ebad9c5ab9f368074b49dd9dd6d 1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b2781ebad9c5ab9f368074b49dd9dd6d 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.LvK 00:27:25.052 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.LvK 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.LvK 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.053 13:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1ba6625350c5290e1cddfa34d0fcba2390127289af3a7c50 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kH5 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ba6625350c5290e1cddfa34d0fcba2390127289af3a7c50 2 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ba6625350c5290e1cddfa34d0fcba2390127289af3a7c50 2 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ba6625350c5290e1cddfa34d0fcba2390127289af3a7c50 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.053 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kH5 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kH5 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kH5 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3666d5828ecff9a057940c14d743a197 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gg8 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3666d5828ecff9a057940c14d743a197 0 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3666d5828ecff9a057940c14d743a197 0 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3666d5828ecff9a057940c14d743a197 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gg8 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gg8 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.gg8 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28d9ee698cd5aa92dab2dbaf4a8eb82080c8d23308cce001de2c43219223dfbe 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.STH 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28d9ee698cd5aa92dab2dbaf4a8eb82080c8d23308cce001de2c43219223dfbe 3 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28d9ee698cd5aa92dab2dbaf4a8eb82080c8d23308cce001de2c43219223dfbe 3 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28d9ee698cd5aa92dab2dbaf4a8eb82080c8d23308cce001de2c43219223dfbe 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:25.313 13:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.STH 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.STH 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.STH 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2304005 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2304005 ']' 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.313 13:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QwY 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3mQ ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3mQ 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.T2P 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Bdh ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bdh 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5AI 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.LvK ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LvK 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.kH5 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.574 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gg8 ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gg8 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.STH 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.575 13:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:25.575 13:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:28.874 Waiting for block devices as requested 00:27:29.133 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:29.133 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:29.133 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:29.393 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:29.393 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:29.393 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:29.651 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:29.651 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:29.651 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:29.910 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:29.910 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:29.910 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:29.910 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:30.170 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:30.170 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:30.170 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:30.429 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.999 No valid GPT data, bailing 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1
00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:27:30.999 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:31.000
00:27:31.000 Discovery Log Number of Records 2, Generation counter 2
00:27:31.000 =====Discovery Log Entry 0======
00:27:31.000 trtype: tcp
00:27:31.000 adrfam: ipv4
00:27:31.000 subtype: current discovery subsystem
00:27:31.000 treq: not specified, sq flow control disable supported
00:27:31.000 portid: 1
00:27:31.000 trsvcid: 4420
00:27:31.000 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:31.000 traddr: 10.0.0.1
00:27:31.000 eflags: none
00:27:31.000 sectype: none
00:27:31.000 =====Discovery Log Entry 1======
00:27:31.000 trtype: tcp
00:27:31.000 adrfam: ipv4
00:27:31.000 subtype: nvme subsystem
00:27:31.000 treq: not specified, sq flow control disable supported
00:27:31.000 portid: 1
00:27:31.000 trsvcid: 4420
00:27:31.000 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:31.000 traddr: 10.0.0.1
00:27:31.000 eflags: none
00:27:31.000 sectype: none
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==:
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==:
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==:
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]]
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==:
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.000 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.260 nvme0n1 00:27:31.260 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.260 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.260 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.261 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.521 nvme0n1 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.521 13:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.521 
13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.521 13:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.521 nvme0n1 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.521 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.781 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:31.782 nvme0n1 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.782 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.041 nvme0n1 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.041 13:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.041 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.300 nvme0n1 00:27:32.300 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.300 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.300 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.300 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.300 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.300 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.301 
13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:32.301 
13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.301 13:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.301 13:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.561 nvme0n1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.561 13:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.561 13:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.561 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 nvme0n1 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.822 13:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.822 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.082 nvme0n1 00:27:33.082 13:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:33.082 13:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.082 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.083 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.342 nvme0n1 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.342 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.343 13:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.343 13:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.604 nvme0n1 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.604 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.866 nvme0n1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.866 
13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.866 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.127 nvme0n1 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.127 13:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.127 13:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.407 nvme0n1 00:27:34.407 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.408 13:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.408 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.408 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.408 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.408 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:34.668 
13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.668 13:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.668 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.928 nvme0n1 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.928 13:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.928 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.929 
13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.929 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 nvme0n1 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:35.189 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.190 13:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.190 13:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.762 nvme0n1 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.762 13:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.762 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.024 nvme0n1 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.024 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.286 13:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.286 13:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.546 nvme0n1 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.546 13:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.546 13:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.546 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.547 13:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.547 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.116 nvme0n1 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.116 13:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.116 13:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.116 13:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.686 nvme0n1 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.686 13:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.686 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.259 nvme0n1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.259 13:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.259 13:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.259 13:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.259 13:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.838 nvme0n1 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.838 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.839 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.097 13:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.097 13:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.663 nvme0n1
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==:
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu:
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==:
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]]
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu:
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:39.663 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:39.664 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.232 nvme0n1
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.232 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=:
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=:
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:40.493 13:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.066 nvme0n1
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo:
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=:
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo:
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=:
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.066 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.327 nvme0n1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==:
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==:
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==:
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==:
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.328 nvme0n1
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.328 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.589 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.589 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.589 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:41.589 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.589 13:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp:
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR:
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp:
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR:
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.589 nvme0n1
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.589 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==:
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu:
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==:
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu:
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.590 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.851 nvme0n1
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=:
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=:
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:41.851 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.111 nvme0n1
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- #
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.111 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 nvme0n1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.370 
13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.370 13:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.628 nvme0n1 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 
00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.628 13:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.628 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.887 nvme0n1 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.887 13:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.887 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.888 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 nvme0n1 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.148 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.408 nvme0n1 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.408 13:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.408 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.409 13:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.409 13:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.409 13:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.669 nvme0n1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.669 
13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.669 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.929 nvme0n1 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.929 13:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.929 13:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.929 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.188 nvme0n1 00:27:44.188 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.188 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.188 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.188 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.188 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.188 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.447 13:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.447 13:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.706 nvme0n1 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.706 13:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.706 13:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.706 
13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.706 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.965 nvme0n1 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.965 13:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.965 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.966 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.966 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.966 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.534 nvme0n1 
00:27:45.534 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.534 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.534 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.534 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.534 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.534 13:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:45.534 13:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.534 
13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.534 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.535 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.535 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 nvme0n1 00:27:45.796 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.796 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.796 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.796 13:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.796 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.057 13:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.057 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.319 nvme0n1 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.319 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:46.579 13:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.579 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.580 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.580 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.580 13:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.580 13:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.580 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.839 nvme0n1 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.839 13:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:46.839 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.408 nvme0n1 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.408 13:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:47.408 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.409 13:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.980 nvme0n1 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.980 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.242 13:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.814 nvme0n1 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.814 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.383 nvme0n1 00:27:49.383 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.383 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.383 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.383 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.383 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.383 13:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.383 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.383 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:49.383 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.383 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.643 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.644 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:49.644 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.644 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.644 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.644 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.644 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.213 nvme0n1 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.213 13:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.785 nvme0n1 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.785 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.047 13:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.047 nvme0n1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.047 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 nvme0n1 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.309 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.310 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.310 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.310 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.310 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.310 13:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.570 nvme0n1 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.570 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.571 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.831 nvme0n1 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:51.831 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:51.832 nvme0n1 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.832 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.093 13:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.093 13:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 nvme0n1 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.093 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:52.354 13:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 nvme0n1 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.354 13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.354 
13:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.614 13:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 nvme0n1 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.614 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.614 13:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.873 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.873 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.873 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.874 13:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.874 nvme0n1 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.874 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:53.133 13:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 nvme0n1 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.133 
13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.133 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.393 
13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.393 13:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.653 nvme0n1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.653 13:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.653 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.912 nvme0n1 00:27:53.912 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.912 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.912 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.912 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.912 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.913 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.172 nvme0n1 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.172 13:35:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.172 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.173 13:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.431 nvme0n1 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.431 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.432 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.432 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.432 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.690 13:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.690 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.691 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.691 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.691 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.950 nvme0n1 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.950 
13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.950 13:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.950 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.209 nvme0n1 00:27:55.209 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.209 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.209 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.209 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.209 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.209 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.470 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.470 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.470 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.471 13:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.471 13:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.732 nvme0n1 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:55.732 
13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.732 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.993 13:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.993 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.254 nvme0n1 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.254 13:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.254 13:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.827 nvme0n1 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.827 13:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.827 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 nvme0n1 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.401 
13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2FjNjk4MzUzYzRiY2FkYjhjYmRlMjlkYmY0Yjk2N2OnIqlo: 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjRjZTM5ZjE3YmJlYTM1OTQzZTgwMmRmZjJiMGQxZDdjNWU0NGE4NDhkZjYwYTdiMGJlM2QyYjk3YTJiZGVmN3vK2as=: 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.401 13:35:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.401 13:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.141 nvme0n1 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.141 13:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.141 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:27:58.142 13:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.142 13:35:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.142 13:35:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.779 nvme0n1 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.779 13:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:58.779 13:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.779 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.350 nvme0n1 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWJhNjYyNTM1MGM1MjkwZTFjZGRmYTM0ZDBmY2JhMjM5MDEyNzI4OWFmM2E3YzUwXCoecQ==: 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzY2NmQ1ODI4ZWNmZjlhMDU3OTQwYzE0ZDc0M2ExOTe4nUBu: 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.350 13:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.350 13:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.921 nvme0n1 00:27:59.921 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.921 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.921 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.921 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.921 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.182 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.182 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.182 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.182 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjhkOWVlNjk4Y2Q1YWE5MmRhYjJkYmFmNGE4ZWI4MjA4MGM4ZDIzMzA4Y2NlMDAxZGUyYzQzMjE5MjIzZGZiZaYCVag=: 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.183 
13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.183 13:35:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.754 nvme0n1 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.754 request: 00:28:00.754 { 00:28:00.754 "name": "nvme0", 00:28:00.754 "trtype": "tcp", 00:28:00.754 "traddr": "10.0.0.1", 00:28:00.754 "adrfam": "ipv4", 00:28:00.754 "trsvcid": "4420", 00:28:00.754 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:00.754 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:00.754 "prchk_reftag": false, 00:28:00.754 "prchk_guard": false, 00:28:00.754 "hdgst": false, 00:28:00.754 "ddgst": false, 00:28:00.754 "allow_unrecognized_csi": false, 00:28:00.754 "method": "bdev_nvme_attach_controller", 00:28:00.754 "req_id": 1 00:28:00.754 } 00:28:00.754 Got JSON-RPC error 
response 00:28:00.754 response: 00:28:00.754 { 00:28:00.754 "code": -5, 00:28:00.754 "message": "Input/output error" 00:28:00.754 } 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.754 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.016 request: 
00:28:01.016 { 00:28:01.016 "name": "nvme0", 00:28:01.016 "trtype": "tcp", 00:28:01.016 "traddr": "10.0.0.1", 00:28:01.016 "adrfam": "ipv4", 00:28:01.016 "trsvcid": "4420", 00:28:01.016 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.016 "prchk_reftag": false, 00:28:01.016 "prchk_guard": false, 00:28:01.016 "hdgst": false, 00:28:01.016 "ddgst": false, 00:28:01.016 "dhchap_key": "key2", 00:28:01.016 "allow_unrecognized_csi": false, 00:28:01.016 "method": "bdev_nvme_attach_controller", 00:28:01.016 "req_id": 1 00:28:01.016 } 00:28:01.016 Got JSON-RPC error response 00:28:01.016 response: 00:28:01.016 { 00:28:01.016 "code": -5, 00:28:01.016 "message": "Input/output error" 00:28:01.016 } 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.016 13:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.016 request: 00:28:01.016 { 00:28:01.016 "name": "nvme0", 00:28:01.016 "trtype": "tcp", 00:28:01.016 "traddr": "10.0.0.1", 00:28:01.016 "adrfam": "ipv4", 00:28:01.016 "trsvcid": "4420", 00:28:01.016 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:01.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:01.016 "prchk_reftag": false, 00:28:01.016 "prchk_guard": false, 00:28:01.016 "hdgst": false, 00:28:01.016 "ddgst": false, 00:28:01.016 "dhchap_key": "key1", 00:28:01.016 "dhchap_ctrlr_key": "ckey2", 00:28:01.016 "allow_unrecognized_csi": false, 00:28:01.016 "method": "bdev_nvme_attach_controller", 00:28:01.016 "req_id": 1 00:28:01.016 } 00:28:01.016 Got JSON-RPC error response 00:28:01.016 response: 00:28:01.016 { 00:28:01.016 "code": -5, 00:28:01.016 "message": "Input/output error" 00:28:01.016 } 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.016 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.278 nvme0n1 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:01.278 13:35:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:01.278 
13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.278 request: 00:28:01.278 { 00:28:01.278 "name": "nvme0", 00:28:01.278 "dhchap_key": "key1", 00:28:01.278 "dhchap_ctrlr_key": "ckey2", 00:28:01.278 "method": "bdev_nvme_set_keys", 00:28:01.278 "req_id": 1 00:28:01.278 } 00:28:01.278 Got JSON-RPC error response 00:28:01.278 response: 
00:28:01.278 { 00:28:01.278 "code": -13, 00:28:01.278 "message": "Permission denied" 00:28:01.278 } 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.278 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.538 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:01.538 13:35:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:02.481 13:35:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.481 13:35:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjUyY2RhZDZjMGQ0OGFjZjI2NDA2ZmI1MDIzYjcwMTg2Y2Y2MTg4N2Y4NDNjZWYwxEzePg==: 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: ]] 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2RlYzI5ZThlNWY4OTU4OTdlMjYzZmM4MTBlNzAyNTFjMzE3ZTgwNzkwMDJiYzg4IGI8hQ==: 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.481 nvme0n1 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.481 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDJiNGJlOWFhMTY5MzA2OWExMmRhMDNlMDIwNzFiMzY44khp: 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: ]] 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjI3ODFlYmFkOWM1YWI5ZjM2ODA3NGI0OWRkOWRkNmTnU4OR: 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.741 request: 00:28:02.741 { 00:28:02.741 "name": "nvme0", 00:28:02.741 "dhchap_key": "key2", 00:28:02.741 "dhchap_ctrlr_key": "ckey1", 00:28:02.741 "method": "bdev_nvme_set_keys", 00:28:02.741 "req_id": 1 00:28:02.741 } 00:28:02.741 Got JSON-RPC error response 00:28:02.741 response: 00:28:02.741 { 00:28:02.741 "code": -13, 00:28:02.741 "message": "Permission denied" 00:28:02.741 } 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:02.741 13:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:03.681 
13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.681 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.681 rmmod nvme_tcp 00:28:03.941 rmmod nvme_fabrics 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2304005 ']' 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2304005 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2304005 ']' 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@958 -- # kill -0 2304005 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:03.941 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304005 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304005' 00:28:03.942 killing process with pid 2304005 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2304005 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2304005 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.942 13:35:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.942 13:35:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:06.488 13:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:06.488 13:35:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:09.790 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:09.790 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:09.790 13:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QwY /tmp/spdk.key-null.T2P /tmp/spdk.key-sha256.5AI /tmp/spdk.key-sha384.kH5 /tmp/spdk.key-sha512.STH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:09.790 13:35:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:13.092 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:13.092 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:13.092 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:13.092 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:13.092 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:13.092 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 
00:28:13.092 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:13.092 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:13.354 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:13.354 00:28:13.354 real 0m57.316s 00:28:13.354 user 0m51.372s 00:28:13.354 sys 0m15.503s 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.354 ************************************ 00:28:13.354 END TEST nvmf_auth_host 00:28:13.354 ************************************ 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.354 13:35:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.354 ************************************ 00:28:13.354 START TEST nvmf_digest 00:28:13.354 ************************************ 00:28:13.355 13:35:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:13.617 * Looking for test storage... 00:28:13.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:13.617 13:36:00 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:13.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.617 --rc genhtml_branch_coverage=1 00:28:13.617 --rc genhtml_function_coverage=1 00:28:13.617 --rc genhtml_legend=1 00:28:13.617 --rc geninfo_all_blocks=1 00:28:13.617 --rc 
geninfo_unexecuted_blocks=1 00:28:13.617 00:28:13.617 ' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:13.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.617 --rc genhtml_branch_coverage=1 00:28:13.617 --rc genhtml_function_coverage=1 00:28:13.617 --rc genhtml_legend=1 00:28:13.617 --rc geninfo_all_blocks=1 00:28:13.617 --rc geninfo_unexecuted_blocks=1 00:28:13.617 00:28:13.617 ' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:13.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.617 --rc genhtml_branch_coverage=1 00:28:13.617 --rc genhtml_function_coverage=1 00:28:13.617 --rc genhtml_legend=1 00:28:13.617 --rc geninfo_all_blocks=1 00:28:13.617 --rc geninfo_unexecuted_blocks=1 00:28:13.617 00:28:13.617 ' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:13.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.617 --rc genhtml_branch_coverage=1 00:28:13.617 --rc genhtml_function_coverage=1 00:28:13.617 --rc genhtml_legend=1 00:28:13.617 --rc geninfo_all_blocks=1 00:28:13.617 --rc geninfo_unexecuted_blocks=1 00:28:13.617 00:28:13.617 ' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.617 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@5 -- # export PATH 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest 
-- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.618 13:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # 
pci_net_devs=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:21.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:21.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:21.775 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:21.775 
13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:21.775 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.775 13:36:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.775 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:28:21.776 00:28:21.776 --- 10.0.0.2 ping statistics --- 00:28:21.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.776 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:28:21.776 00:28:21.776 --- 10.0.0.1 ping statistics --- 00:28:21.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.776 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.776 ************************************ 00:28:21.776 START TEST nvmf_digest_clean 00:28:21.776 ************************************ 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2320399 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2320399 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2320399 ']' 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.776 13:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:21.776 [2024-12-06 13:36:07.797002] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:28:21.776 [2024-12-06 13:36:07.797065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.776 [2024-12-06 13:36:07.896604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.776 [2024-12-06 13:36:07.947179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.776 [2024-12-06 13:36:07.947229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.776 [2024-12-06 13:36:07.947238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.776 [2024-12-06 13:36:07.947245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.776 [2024-12-06 13:36:07.947251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:21.776 [2024-12-06 13:36:07.948033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.038 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.038 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:22.038 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.039 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.300 null0 00:28:22.300 [2024-12-06 13:36:08.745649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.300 [2024-12-06 13:36:08.769929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2320557 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2320557 /var/tmp/bperf.sock 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2320557 ']' 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.300 13:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.300 [2024-12-06 13:36:08.829705] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:22.300 [2024-12-06 13:36:08.829769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320557 ] 00:28:22.300 [2024-12-06 13:36:08.923595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.562 [2024-12-06 13:36:08.976800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.135 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.135 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:23.135 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.135 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.135 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.396 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.396 13:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.657 nvme0n1 00:28:23.657 13:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:23.657 13:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.918 Running I/O for 2 seconds... 00:28:25.804 18319.00 IOPS, 71.56 MiB/s [2024-12-06T12:36:12.463Z] 19773.00 IOPS, 77.24 MiB/s 00:28:25.804 Latency(us) 00:28:25.804 [2024-12-06T12:36:12.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:25.804 nvme0n1 : 2.00 19804.77 77.36 0.00 0.00 6456.41 2348.37 17257.81 00:28:25.804 [2024-12-06T12:36:12.463Z] =================================================================================================================== 00:28:25.804 [2024-12-06T12:36:12.463Z] Total : 19804.77 77.36 0.00 0.00 6456.41 2348.37 17257.81 00:28:25.804 { 00:28:25.804 "results": [ 00:28:25.804 { 00:28:25.804 "job": "nvme0n1", 00:28:25.804 "core_mask": "0x2", 00:28:25.804 "workload": "randread", 00:28:25.804 "status": "finished", 00:28:25.804 "queue_depth": 128, 00:28:25.804 "io_size": 4096, 00:28:25.804 "runtime": 2.003255, 00:28:25.804 "iops": 19804.76774050233, 00:28:25.804 "mibps": 77.36237398633723, 00:28:25.804 "io_failed": 0, 00:28:25.804 "io_timeout": 0, 00:28:25.804 "avg_latency_us": 6456.410766412932, 00:28:25.804 "min_latency_us": 2348.3733333333334, 00:28:25.804 "max_latency_us": 17257.81333333333 00:28:25.804 } 00:28:25.804 ], 00:28:25.804 "core_count": 1 00:28:25.804 } 00:28:25.804 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:25.804 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:28:25.804 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:25.804 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:25.804 | select(.opcode=="crc32c") 00:28:25.804 | "\(.module_name) \(.executed)"' 00:28:25.804 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2320557 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2320557 ']' 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2320557 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320557 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320557' 00:28:26.065 killing process with pid 2320557 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2320557 00:28:26.065 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.065 00:28:26.065 Latency(us) 00:28:26.065 [2024-12-06T12:36:12.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.065 [2024-12-06T12:36:12.724Z] =================================================================================================================== 00:28:26.065 [2024-12-06T12:36:12.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.065 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2320557 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2321842 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2321842 /var/tmp/bperf.sock 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2321842 ']' 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.326 13:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.326 [2024-12-06 13:36:12.794866] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:26.326 [2024-12-06 13:36:12.794922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321842 ] 00:28:26.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.326 Zero copy mechanism will not be used. 
00:28:26.326 [2024-12-06 13:36:12.879077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.326 [2024-12-06 13:36:12.908489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.284 13:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.545 nvme0n1 00:28:27.545 13:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:27.545 13:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:27.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.806 Zero copy mechanism will not be used. 00:28:27.806 Running I/O for 2 seconds... 
00:28:29.686 3371.00 IOPS, 421.38 MiB/s [2024-12-06T12:36:16.345Z] 3931.00 IOPS, 491.38 MiB/s 00:28:29.686 Latency(us) 00:28:29.686 [2024-12-06T12:36:16.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.686 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:29.686 nvme0n1 : 2.01 3928.80 491.10 0.00 0.00 4069.51 781.65 8628.91 00:28:29.686 [2024-12-06T12:36:16.345Z] =================================================================================================================== 00:28:29.686 [2024-12-06T12:36:16.345Z] Total : 3928.80 491.10 0.00 0.00 4069.51 781.65 8628.91 00:28:29.686 { 00:28:29.686 "results": [ 00:28:29.686 { 00:28:29.686 "job": "nvme0n1", 00:28:29.686 "core_mask": "0x2", 00:28:29.686 "workload": "randread", 00:28:29.686 "status": "finished", 00:28:29.686 "queue_depth": 16, 00:28:29.686 "io_size": 131072, 00:28:29.686 "runtime": 2.005191, 00:28:29.686 "iops": 3928.802792352449, 00:28:29.686 "mibps": 491.10034904405614, 00:28:29.686 "io_failed": 0, 00:28:29.686 "io_timeout": 0, 00:28:29.686 "avg_latency_us": 4069.510271642549, 00:28:29.686 "min_latency_us": 781.6533333333333, 00:28:29.686 "max_latency_us": 8628.906666666666 00:28:29.686 } 00:28:29.686 ], 00:28:29.686 "core_count": 1 00:28:29.686 } 00:28:29.686 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:29.686 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:29.686 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:29.686 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:29.686 | select(.opcode=="crc32c") 00:28:29.686 | "\(.module_name) \(.executed)"' 00:28:29.686 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2321842 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2321842 ']' 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2321842 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2321842 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2321842' 00:28:29.945 killing process with pid 2321842 00:28:29.945 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2321842 00:28:29.945 Received shutdown signal, test time was about 2.000000 seconds 
00:28:29.945 00:28:29.945 Latency(us) 00:28:29.945 [2024-12-06T12:36:16.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.946 [2024-12-06T12:36:16.605Z] =================================================================================================================== 00:28:29.946 [2024-12-06T12:36:16.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.946 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2321842 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2322575 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2322575 /var/tmp/bperf.sock 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2322575 ']' 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:30.205 13:36:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.205 13:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.205 [2024-12-06 13:36:16.680834] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:30.205 [2024-12-06 13:36:16.680889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322575 ] 00:28:30.205 [2024-12-06 13:36:16.762250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.205 [2024-12-06 13:36:16.791378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.145 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.145 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:31.145 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.146 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.146 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:31.146 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.146 13:36:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.716 nvme0n1 00:28:31.716 13:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:31.716 13:36:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.716 Running I/O for 2 seconds... 
00:28:33.601 29518.00 IOPS, 115.30 MiB/s [2024-12-06T12:36:20.260Z] 29651.00 IOPS, 115.82 MiB/s 00:28:33.601 Latency(us) 00:28:33.601 [2024-12-06T12:36:20.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.601 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.601 nvme0n1 : 2.01 29656.77 115.85 0.00 0.00 4309.10 3099.31 14199.47 00:28:33.601 [2024-12-06T12:36:20.260Z] =================================================================================================================== 00:28:33.601 [2024-12-06T12:36:20.260Z] Total : 29656.77 115.85 0.00 0.00 4309.10 3099.31 14199.47 00:28:33.601 { 00:28:33.601 "results": [ 00:28:33.601 { 00:28:33.601 "job": "nvme0n1", 00:28:33.601 "core_mask": "0x2", 00:28:33.601 "workload": "randwrite", 00:28:33.601 "status": "finished", 00:28:33.601 "queue_depth": 128, 00:28:33.601 "io_size": 4096, 00:28:33.601 "runtime": 2.005276, 00:28:33.601 "iops": 29656.765452735683, 00:28:33.601 "mibps": 115.84674004974876, 00:28:33.601 "io_failed": 0, 00:28:33.601 "io_timeout": 0, 00:28:33.601 "avg_latency_us": 4309.103462810381, 00:28:33.601 "min_latency_us": 3099.306666666667, 00:28:33.601 "max_latency_us": 14199.466666666667 00:28:33.601 } 00:28:33.601 ], 00:28:33.601 "core_count": 1 00:28:33.601 } 00:28:33.601 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:33.601 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:33.601 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:33.601 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:33.601 | select(.opcode=="crc32c") 00:28:33.601 | "\(.module_name) \(.executed)"' 00:28:33.601 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:33.861 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:33.861 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:33.861 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:33.861 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2322575 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2322575 ']' 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2322575 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322575 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322575' 00:28:33.862 killing process with pid 2322575 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2322575 00:28:33.862 Received shutdown signal, test time was about 2.000000 seconds 
00:28:33.862 00:28:33.862 Latency(us) 00:28:33.862 [2024-12-06T12:36:20.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.862 [2024-12-06T12:36:20.521Z] =================================================================================================================== 00:28:33.862 [2024-12-06T12:36:20.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.862 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2322575 00:28:34.122 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:34.122 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:34.122 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:34.122 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2323263 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2323263 /var/tmp/bperf.sock 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2323263 ']' 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:34.123 13:36:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:34.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.123 13:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.123 [2024-12-06 13:36:20.619489] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:34.123 [2024-12-06 13:36:20.619547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323263 ] 00:28:34.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:34.123 Zero copy mechanism will not be used. 
00:28:34.123 [2024-12-06 13:36:20.705126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.123 [2024-12-06 13:36:20.733223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.063 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.324 nvme0n1 00:28:35.324 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:35.324 13:36:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:35.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.583 Zero copy mechanism will not be used. 00:28:35.583 Running I/O for 2 seconds... 
00:28:37.466 4153.00 IOPS, 519.12 MiB/s [2024-12-06T12:36:24.125Z] 5271.50 IOPS, 658.94 MiB/s 00:28:37.466 Latency(us) 00:28:37.466 [2024-12-06T12:36:24.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.466 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:37.466 nvme0n1 : 2.01 5266.08 658.26 0.00 0.00 3032.64 1228.80 12561.07 00:28:37.466 [2024-12-06T12:36:24.125Z] =================================================================================================================== 00:28:37.466 [2024-12-06T12:36:24.125Z] Total : 5266.08 658.26 0.00 0.00 3032.64 1228.80 12561.07 00:28:37.466 { 00:28:37.466 "results": [ 00:28:37.466 { 00:28:37.466 "job": "nvme0n1", 00:28:37.466 "core_mask": "0x2", 00:28:37.466 "workload": "randwrite", 00:28:37.466 "status": "finished", 00:28:37.466 "queue_depth": 16, 00:28:37.466 "io_size": 131072, 00:28:37.466 "runtime": 2.005665, 00:28:37.466 "iops": 5266.083817586686, 00:28:37.466 "mibps": 658.2604771983357, 00:28:37.466 "io_failed": 0, 00:28:37.466 "io_timeout": 0, 00:28:37.466 "avg_latency_us": 3032.638404342612, 00:28:37.466 "min_latency_us": 1228.8, 00:28:37.466 "max_latency_us": 12561.066666666668 00:28:37.466 } 00:28:37.466 ], 00:28:37.466 "core_count": 1 00:28:37.466 } 00:28:37.466 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:37.466 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:37.466 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:37.466 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:37.466 | select(.opcode=="crc32c") 00:28:37.466 | "\(.module_name) \(.executed)"' 00:28:37.466 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2323263 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2323263 ']' 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2323263 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2323263 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2323263' 00:28:37.726 killing process with pid 2323263 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2323263 00:28:37.726 Received shutdown signal, test time was about 2.000000 seconds 
00:28:37.726 00:28:37.726 Latency(us) 00:28:37.726 [2024-12-06T12:36:24.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.726 [2024-12-06T12:36:24.385Z] =================================================================================================================== 00:28:37.726 [2024-12-06T12:36:24.385Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.726 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2323263 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2320399 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2320399 ']' 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2320399 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320399 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320399' 00:28:37.986 killing process with pid 2320399 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2320399 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2320399 00:28:37.986 00:28:37.986 
real 0m16.867s 00:28:37.986 user 0m33.226s 00:28:37.986 sys 0m3.871s 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.986 ************************************ 00:28:37.986 END TEST nvmf_digest_clean 00:28:37.986 ************************************ 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.986 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:38.246 ************************************ 00:28:38.246 START TEST nvmf_digest_error 00:28:38.246 ************************************ 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2324087 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2324087 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2324087 ']' 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.246 13:36:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.246 [2024-12-06 13:36:24.730298] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:38.246 [2024-12-06 13:36:24.730351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.246 [2024-12-06 13:36:24.821955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.246 [2024-12-06 13:36:24.854975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.246 [2024-12-06 13:36:24.855010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:38.246 [2024-12-06 13:36:24.855016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.246 [2024-12-06 13:36:24.855020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.246 [2024-12-06 13:36:24.855025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.246 [2024-12-06 13:36:24.855500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.190 [2024-12-06 13:36:25.569464] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.190 13:36:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.190 null0 00:28:39.190 [2024-12-06 13:36:25.648364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.190 [2024-12-06 13:36:25.672576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2324318 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2324318 /var/tmp/bperf.sock 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2324318 ']' 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.190 13:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:39.190 [2024-12-06 13:36:25.730678] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:39.190 [2024-12-06 13:36:25.730733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324318 ] 00:28:39.190 [2024-12-06 13:36:25.814513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.190 [2024-12-06 13:36:25.844246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.194 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.454 nvme0n1 00:28:40.454 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:40.454 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.454 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.454 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.454 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:40.454 13:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.454 Running I/O for 2 seconds... 00:28:40.454 [2024-12-06 13:36:27.089971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.454 [2024-12-06 13:36:27.090005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.454 [2024-12-06 13:36:27.090017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.454 [2024-12-06 13:36:27.098742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.454 [2024-12-06 13:36:27.098762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.454 [2024-12-06 13:36:27.098775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.454 [2024-12-06 13:36:27.108365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.454 [2024-12-06 13:36:27.108384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.455 [2024-12-06 13:36:27.108391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.117573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.117591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24905 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.117598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.127022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.127039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.127046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.135394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.135412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.135418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.144438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.144463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.144470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.153269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.153286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.153293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.162224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.162241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.162247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.171306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.171330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.179632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.179653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.179659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.188429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 
00:28:40.716 [2024-12-06 13:36:27.188446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.188452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.197385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.197402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.197408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.206477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.206494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.206500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.215067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.215084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.215090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.223486] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.223503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.223510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.716 [2024-12-06 13:36:27.232783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.716 [2024-12-06 13:36:27.232801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.716 [2024-12-06 13:36:27.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.241296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.241314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.241320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.250808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.250825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.250832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.259402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.259420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.259426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.268193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.268210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.268217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.277580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.277598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.277604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.289365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.289382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.289388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.297438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.297458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.297465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.308237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.308254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.308261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.317340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.317358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.317365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.325893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.325910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 
13:36:27.325917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.334788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.334805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.334815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.343815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.343839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.352383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.352401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.352407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.360861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.360878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21286 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.360885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.717 [2024-12-06 13:36:27.370327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.717 [2024-12-06 13:36:27.370343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.717 [2024-12-06 13:36:27.370350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.379602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.379620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.379626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.387522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.387539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.387545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.396834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.396851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.396858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.405651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.405668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.405674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.414521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.414541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.414547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.422929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.422947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.422953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.432068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15ded60) 00:28:40.978 [2024-12-06 13:36:27.432084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.978 [2024-12-06 13:36:27.432091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.978 [2024-12-06 13:36:27.440874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.440891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.440898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.979 [2024-12-06 13:36:27.450936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.450953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.450959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.979 [2024-12-06 13:36:27.460155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.460171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.460178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.979 [2024-12-06 13:36:27.468503] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.468519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.468526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.979 [2024-12-06 13:36:27.477098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.477115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.477121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.979 [2024-12-06 13:36:27.486705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.486722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.486732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.979 [2024-12-06 13:36:27.495174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:40.979 [2024-12-06 13:36:27.495191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.979 [2024-12-06 13:36:27.495197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.503873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.503890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.503897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.512777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.512793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.512800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.521168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.521185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.521191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.529897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.529914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.529920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.538805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.538822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.538829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.547771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.547788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.547794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.556840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.556858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.556864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.565835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.565855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.565861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.573975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.573992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.573998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.583144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.583161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.583167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.591215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.591231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.591238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.600556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.600572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.600578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.610272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.610289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.610295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.618707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.618724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.618730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:40.979 [2024-12-06 13:36:27.630381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:40.979 [2024-12-06 13:36:27.630398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.979 [2024-12-06 13:36:27.630407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.638525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.638542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.638548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.648476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.648493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.648500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.656905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.656922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.656929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.666497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.666514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.666521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.675095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.675112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.675118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.683413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.683430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.683436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.692747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.692764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.692771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.701589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.701606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.701613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.710445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.710467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.710473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.719241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.719258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.719267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.727682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.727700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.727706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.736886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.736903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.736909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.745723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.745740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.745746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.754579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.754596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.754602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.763308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.763325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.763331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.772342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.772359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.772365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.780522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.780539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.780545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.790596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.790613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.790619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.798621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.798640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.798646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.808130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.808147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.241 [2024-12-06 13:36:27.808153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.241 [2024-12-06 13:36:27.816294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.241 [2024-12-06 13:36:27.816311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.816317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.825169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.825188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.825195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.834140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.834157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.834163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.844292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.844309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.844315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.851828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.851846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.851852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.860855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.860872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.860879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.871273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.871290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.871297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.883173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.883191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.883198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.242 [2024-12-06 13:36:27.892872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.242 [2024-12-06 13:36:27.892890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.242 [2024-12-06 13:36:27.892897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.902042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.902060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.902067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.912026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.912044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.912050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.919702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.919720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.919726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.928650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.928669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.928677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.938347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.938368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.938374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.947002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.947019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.947025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.956203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.956223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.956230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.965020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.965038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.965044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.973460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.973478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.973484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.985437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.985458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.985465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:27.997415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:27.997432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:27.997438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.004906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.004923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.004930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.014596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.014614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.014620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.023227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.023245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.023251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.031589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.031607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.031614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.041201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.041218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.041224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.049815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.049832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.049839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.058709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.058726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.058733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 [2024-12-06 13:36:28.067744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.504 [2024-12-06 13:36:28.067762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.504 [2024-12-06 13:36:28.067768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.504 27865.00 IOPS, 108.85 MiB/s [2024-12-06T12:36:28.164Z] [2024-12-06 13:36:28.076568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.076585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.076594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.085652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.085670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.085676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.094275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.094293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.094299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.103647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.103666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.103672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.111361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.111378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.111388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.120396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.120413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.120419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.130263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.130281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.130287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.139694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.139712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.139719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.149623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.149640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.149647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.505 [2024-12-06 13:36:28.158076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.505 [2024-12-06 13:36:28.158094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.505 [2024-12-06 13:36:28.158100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.166661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.166679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.166685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.175970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.175987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.175993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.184415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.184436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.184442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.193124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.193142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.193149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.202855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.202873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.202879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.211813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.211830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.211836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.221622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.221639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.221646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:41.767 [2024-12-06 13:36:28.229813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60)
00:28:41.767 [2024-12-06 13:36:28.229830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.767 [2024-12-06 13:36:28.229837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0
sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.238762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.238780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.238786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.248506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.248523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.248530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.256767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.256787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.256793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.265865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.265883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.265896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.279048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.279066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.279073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.286416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.286432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.286439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.767 [2024-12-06 13:36:28.296923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.767 [2024-12-06 13:36:28.296941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.767 [2024-12-06 13:36:28.296949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.307580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.307597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 
13:36:28.307603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.314872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.314890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.314896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.324399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.324416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.324423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.334426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.334444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.334450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.342665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.342682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.342689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.351064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.351084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.351090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.360211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.360229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.360235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.368730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.368747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.368754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.376866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.376883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.376890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.385654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.385672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.385678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.395977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.395995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.396002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.405732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.405749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.405755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.413601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.413618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.413625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:41.768 [2024-12-06 13:36:28.422598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:41.768 [2024-12-06 13:36:28.422615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:41.768 [2024-12-06 13:36:28.422622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.431745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.431763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.431769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.440350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.440366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.440372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.449808] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.449828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.449835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.459044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.459061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.459067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.466440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.466462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.466469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.476339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.476357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.476364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.485832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.485849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.485856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.494038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.494056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.494062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.504280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.504300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.504310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.513699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.513717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.513723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.521697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.521715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.521721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.532506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.532530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.541139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.541157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.541163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.548811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.548828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 
13:36:28.548835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.558568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.558585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.558593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.567671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.567689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.567696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.575797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.575814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.575821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.586177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.586198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1687 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.586205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.595700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.595717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.595723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.604413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.030 [2024-12-06 13:36:28.604430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.030 [2024-12-06 13:36:28.604437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.030 [2024-12-06 13:36:28.613103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.613121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.613127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.621639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.621656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.621663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.631555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.631572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.631578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.641437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.641459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.641466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.649136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.649154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.649160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.659063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.659084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.659093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.667638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.667655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.667662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.676183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.676199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.676206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.031 [2024-12-06 13:36:28.685172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.031 [2024-12-06 13:36:28.685188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.031 [2024-12-06 13:36:28.685195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.293 [2024-12-06 13:36:28.695357] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.293 [2024-12-06 13:36:28.695375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.293 [2024-12-06 13:36:28.695382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.293 [2024-12-06 13:36:28.703255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.703272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.703278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.712232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.712248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.712255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.722174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.722191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.722197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.730055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.730072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.730081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.739065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.739086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.739092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.748016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.748035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.748046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.757257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.757273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.757280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.766510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.766527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.766533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.776431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.776458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.784085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.784102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.784109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.792980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.792997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 
13:36:28.793004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.803918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.803935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.803941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.812161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.812178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.812185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.820854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.820872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.820878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.829788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.829806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17665 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.829812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.838584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.838602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.838608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.847039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.847057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.847064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.856411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.856432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.856441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.865510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.865527] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.865534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.874296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.874313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.874319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.883064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.883081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.883087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.894074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.894092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.894101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.901827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 
13:36:28.901843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.901849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.911005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.911023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.911029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.919546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.919563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.919569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.930176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.930192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.930198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.938835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x15ded60) 00:28:42.294 [2024-12-06 13:36:28.938851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.294 [2024-12-06 13:36:28.938858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.294 [2024-12-06 13:36:28.946442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.295 [2024-12-06 13:36:28.946462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.295 [2024-12-06 13:36:28.946469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:28.957005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.556 [2024-12-06 13:36:28.957022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.556 [2024-12-06 13:36:28.957029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:28.966737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.556 [2024-12-06 13:36:28.966757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.556 [2024-12-06 13:36:28.966765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:28.975729] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.556 [2024-12-06 13:36:28.975749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.556 [2024-12-06 13:36:28.975756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:28.984885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.556 [2024-12-06 13:36:28.984902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.556 [2024-12-06 13:36:28.984909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:28.992357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.556 [2024-12-06 13:36:28.992376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.556 [2024-12-06 13:36:28.992384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:29.002262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.556 [2024-12-06 13:36:29.002279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.556 [2024-12-06 13:36:29.002285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:42.556 [2024-12-06 13:36:29.011838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.011856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.011866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 [2024-12-06 13:36:29.020326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.020343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.020349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 [2024-12-06 13:36:29.029309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.029326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.029333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 [2024-12-06 13:36:29.037996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.038013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.038019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 [2024-12-06 13:36:29.046737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.046754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.046761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 [2024-12-06 13:36:29.054965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.054982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.054988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 [2024-12-06 13:36:29.064525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.064543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.064549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 28040.00 IOPS, 109.53 MiB/s [2024-12-06T12:36:29.216Z] [2024-12-06 13:36:29.073721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15ded60) 00:28:42.557 [2024-12-06 13:36:29.073738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6177 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:42.557 [2024-12-06 13:36:29.073744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.557 00:28:42.557 Latency(us) 00:28:42.557 [2024-12-06T12:36:29.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.557 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:42.557 nvme0n1 : 2.04 27498.93 107.42 0.00 0.00 4558.47 2102.61 46967.47 00:28:42.557 [2024-12-06T12:36:29.216Z] =================================================================================================================== 00:28:42.557 [2024-12-06T12:36:29.216Z] Total : 27498.93 107.42 0.00 0.00 4558.47 2102.61 46967.47 00:28:42.557 { 00:28:42.557 "results": [ 00:28:42.557 { 00:28:42.557 "job": "nvme0n1", 00:28:42.557 "core_mask": "0x2", 00:28:42.557 "workload": "randread", 00:28:42.557 "status": "finished", 00:28:42.557 "queue_depth": 128, 00:28:42.557 "io_size": 4096, 00:28:42.557 "runtime": 2.044007, 00:28:42.557 "iops": 27498.927352010047, 00:28:42.557 "mibps": 107.41768496878925, 00:28:42.557 "io_failed": 0, 00:28:42.557 "io_timeout": 0, 00:28:42.557 "avg_latency_us": 4558.468651674732, 00:28:42.557 "min_latency_us": 2102.6133333333332, 00:28:42.557 "max_latency_us": 46967.46666666667 00:28:42.557 } 00:28:42.557 ], 00:28:42.557 "core_count": 1 00:28:42.557 } 00:28:42.557 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:42.557 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:42.557 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:42.557 | .driver_specific 00:28:42.557 | .nvme_error 00:28:42.557 | .status_code 00:28:42.557 | .command_transient_transport_error' 00:28:42.557 13:36:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2324318 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2324318 ']' 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2324318 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324318 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324318' 00:28:42.819 killing process with pid 2324318 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2324318 00:28:42.819 Received shutdown signal, test time was about 2.000000 seconds 00:28:42.819 00:28:42.819 Latency(us) 00:28:42.819 [2024-12-06T12:36:29.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.819 [2024-12-06T12:36:29.478Z] 
=================================================================================================================== 00:28:42.819 [2024-12-06T12:36:29.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.819 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2324318 00:28:43.085 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:43.085 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2325014 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2325014 /var/tmp/bperf.sock 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2325014 ']' 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:43.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.086 13:36:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.086 [2024-12-06 13:36:29.533907] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:43.087 [2024-12-06 13:36:29.533962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325014 ] 00:28:43.087 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.087 Zero copy mechanism will not be used. 00:28:43.087 [2024-12-06 13:36:29.614621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.087 [2024-12-06 13:36:29.643836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.029 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.289 nvme0n1 00:28:44.289 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:44.289 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.289 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.289 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.289 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:44.289 13:36:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.289 Zero copy mechanism will not be used. 00:28:44.289 Running I/O for 2 seconds... 
00:28:44.289 [2024-12-06 13:36:30.838966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.838998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.839008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.849584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.849608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.849616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.859781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.859801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.859808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.871501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.871520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.871527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.882518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.882544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.882550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.893177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.893195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.893201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.904308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.904326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.904333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.915260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.915277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.915284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.289 [2024-12-06 13:36:30.925939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.289 [2024-12-06 13:36:30.925958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.289 [2024-12-06 13:36:30.925964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.290 [2024-12-06 13:36:30.935583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.290 [2024-12-06 13:36:30.935601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.290 [2024-12-06 13:36:30.935607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.290 [2024-12-06 13:36:30.945817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.290 [2024-12-06 13:36:30.945836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.290 [2024-12-06 13:36:30.945842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:30.956430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:30.956449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.557 [2024-12-06 13:36:30.956461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:30.967471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:30.967489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:30.967495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:30.978841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:30.978860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:30.978866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:30.990952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:30.990971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:30.990978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.002582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.002600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.002606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.014765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.014783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.014790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.026234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.026252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.026258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.038433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.038451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.038462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.048982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.049000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.049006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.059870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.059888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.059894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.070434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.070452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.070466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.073284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.073302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.073308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.082130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 
00:28:44.557 [2024-12-06 13:36:31.082149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.557 [2024-12-06 13:36:31.082156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.557 [2024-12-06 13:36:31.091120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.557 [2024-12-06 13:36:31.091138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.091144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.095863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.095881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.095887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.102301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.102319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.102326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.112334] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.112352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.112359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.120688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.120707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.120713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.125713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.125732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.125739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.133332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.133354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.133360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.138867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.138886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.138893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.147714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.147733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.147739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.157863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.157882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.157889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.166387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.166406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.166412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.174319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.174338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.174344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.178973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.178992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.178999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.187842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.187860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.187867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.197412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.197431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.197437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.204426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.204444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.204451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.558 [2024-12-06 13:36:31.212554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.558 [2024-12-06 13:36:31.212572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.558 [2024-12-06 13:36:31.212579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.223215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.223234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.223241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.229789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.229808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:44.820 [2024-12-06 13:36:31.229814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.240479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.240497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.240504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.250947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.250966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.250972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.259762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.259780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.259786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.268292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.268310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.268316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.273305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.273323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.273333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.280788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.280806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.280813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.287661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.287680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.287686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.296888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.296906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.296913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.305404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.305423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.305429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.313855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.313873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.313880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.325122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.325141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.325147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.337347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 
00:28:44.820 [2024-12-06 13:36:31.337365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.337371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.348339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.348358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.348364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.358582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.358603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.358610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.363746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.363765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.363771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.374549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.374568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.374575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.385990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.386008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.386015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.397480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.397498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.397504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.409289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.409307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.409314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.420697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.420716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.820 [2024-12-06 13:36:31.420722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.820 [2024-12-06 13:36:31.431641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.820 [2024-12-06 13:36:31.431659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-12-06 13:36:31.431666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.821 [2024-12-06 13:36:31.443231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.821 [2024-12-06 13:36:31.443251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-12-06 13:36:31.443260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.821 [2024-12-06 13:36:31.454188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.821 [2024-12-06 13:36:31.454207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-12-06 13:36:31.454213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.821 [2024-12-06 13:36:31.464315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.821 [2024-12-06 13:36:31.464334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-12-06 13:36:31.464340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.821 [2024-12-06 13:36:31.472633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:44.821 [2024-12-06 13:36:31.472652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.821 [2024-12-06 13:36:31.472658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.082 [2024-12-06 13:36:31.483141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.082 [2024-12-06 13:36:31.483160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.082 [2024-12-06 13:36:31.483166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.082 [2024-12-06 13:36:31.494782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.082 [2024-12-06 13:36:31.494802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.082 [2024-12-06 13:36:31.494808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:45.082 [2024-12-06 13:36:31.506716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0)
00:28:45.082 [2024-12-06 13:36:31.506735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.082 [2024-12-06 13:36:31.506741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... identical "data digest error on tqpair=(0xfa48c0)" / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets repeated on qid:1 with varying cid and lba values, 00:28:45.082 through 00:28:45.347; duplicate entries omitted ...]
00:28:45.347 3531.00 IOPS, 441.38 MiB/s [2024-12-06T12:36:32.006Z]
[... further identical data digest error triplets on tqpair=(0xfa48c0), 00:28:45.347 through 00:28:45.611; duplicate entries omitted ...]
00:28:45.611 [2024-12-06 13:36:32.080333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0)
00:28:45.611 [2024-12-06 13:36:32.080350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.611 [2024-12-06 13:36:32.080356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0
dnr:0 00:28:45.611 [2024-12-06 13:36:32.084805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.084825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.084831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.089287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.089305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.089311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.093573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.093590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.093597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.100079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.100097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.100103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.104717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.104734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.104741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.111301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.111318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.111325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.118714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.118731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.118737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.129153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.129171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.129178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.140475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.140492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.140498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.148909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.148927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.148933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.153544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.153562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.153568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.157813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.157831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:45.611 [2024-12-06 13:36:32.157838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.168980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.168998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.169004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.178441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.178464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.178471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.188736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.188753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.188760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.199839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.199858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.199865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.211220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.211237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.611 [2024-12-06 13:36:32.223073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.611 [2024-12-06 13:36:32.223092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.611 [2024-12-06 13:36:32.223101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.612 [2024-12-06 13:36:32.235965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.612 [2024-12-06 13:36:32.235983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.612 [2024-12-06 13:36:32.235989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.612 [2024-12-06 13:36:32.248745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.612 [2024-12-06 13:36:32.248763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.612 [2024-12-06 13:36:32.248769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.612 [2024-12-06 13:36:32.260691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.612 [2024-12-06 13:36:32.260708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.612 [2024-12-06 13:36:32.260715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.873 [2024-12-06 13:36:32.272829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.873 [2024-12-06 13:36:32.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.873 [2024-12-06 13:36:32.272853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.873 [2024-12-06 13:36:32.285951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.873 [2024-12-06 13:36:32.285969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.873 [2024-12-06 13:36:32.285975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.873 [2024-12-06 13:36:32.298868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 
00:28:45.874 [2024-12-06 13:36:32.298886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.298892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.311490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.311508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.311514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.324042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.324061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.324067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.335857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.335875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.335881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.347210] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.347228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.347234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.355180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.355198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.355205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.364522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.364540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.364546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.374287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.374305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.374311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.384914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.384933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.384939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.395846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.395864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.395871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.407950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.407969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.407975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.418106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.418124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.418134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.430102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.430120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.430126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.439565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.439583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.439590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.448723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.448741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.448747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.458999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.459017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.459023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.470237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.470255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.470261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.479949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.479967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.479973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.489226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.489244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.489250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.499781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.499799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:45.874 [2024-12-06 13:36:32.499805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.505388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.505409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.505415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.514202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.514220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.514226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:45.874 [2024-12-06 13:36:32.525035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:45.874 [2024-12-06 13:36:32.525054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.874 [2024-12-06 13:36:32.525060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.536286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.536304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.536311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.546393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.546411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.546417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.554892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.554909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.554916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.565185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.565203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.565210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.576233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.576251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.576257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.588713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.588731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.588737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.601655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.601673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.601679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.613830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.613848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.613854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.626018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 
00:28:46.136 [2024-12-06 13:36:32.626036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.626043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.638638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.638656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.638662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.651083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.651101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.651107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.664201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.664218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.664224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.675394] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.675411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.675417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.686847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.686864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.136 [2024-12-06 13:36:32.686871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.136 [2024-12-06 13:36:32.697551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.136 [2024-12-06 13:36:32.697569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.697578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.706368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.706385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.706391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.716504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.716522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.716528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.725820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.725838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.725844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.733876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.733893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.733899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.744735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.744751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.744758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.754617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.754635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.754641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.765752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.765769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.765776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.777764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.777782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.777789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.137 [2024-12-06 13:36:32.789318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.137 [2024-12-06 13:36:32.789339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.137 [2024-12-06 13:36:32.789345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.397 [2024-12-06 13:36:32.800968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.397 [2024-12-06 13:36:32.800986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.397 [2024-12-06 13:36:32.800993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:46.397 [2024-12-06 13:36:32.813042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.397 [2024-12-06 13:36:32.813060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.397 [2024-12-06 13:36:32.813067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:46.397 [2024-12-06 13:36:32.824789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.397 [2024-12-06 13:36:32.824807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.397 [2024-12-06 13:36:32.824813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:46.397 [2024-12-06 13:36:32.836091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfa48c0) 00:28:46.397 [2024-12-06 13:36:32.836108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:46.397 [2024-12-06 13:36:32.836114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:46.397 3479.50 IOPS, 434.94 MiB/s 00:28:46.397 Latency(us) 00:28:46.397 [2024-12-06T12:36:33.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.397 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:46.397 nvme0n1 : 2.00 3481.99 435.25 0.00 0.00 4592.10 515.41 13325.65 00:28:46.397 [2024-12-06T12:36:33.056Z] =================================================================================================================== 00:28:46.397 [2024-12-06T12:36:33.056Z] Total : 3481.99 435.25 0.00 0.00 4592.10 515.41 13325.65 00:28:46.397 { 00:28:46.397 "results": [ 00:28:46.397 { 00:28:46.397 "job": "nvme0n1", 00:28:46.397 "core_mask": "0x2", 00:28:46.397 "workload": "randread", 00:28:46.397 "status": "finished", 00:28:46.398 "queue_depth": 16, 00:28:46.398 "io_size": 131072, 00:28:46.398 "runtime": 2.003165, 00:28:46.398 "iops": 3481.9897512186963, 00:28:46.398 "mibps": 435.24871890233703, 00:28:46.398 "io_failed": 0, 00:28:46.398 "io_timeout": 0, 00:28:46.398 "avg_latency_us": 4592.100962485066, 00:28:46.398 "min_latency_us": 515.4133333333333, 00:28:46.398 "max_latency_us": 13325.653333333334 00:28:46.398 } 00:28:46.398 ], 00:28:46.398 "core_count": 1 00:28:46.398 } 00:28:46.398 13:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:46.398 13:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:46.398 13:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:46.398 13:36:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:46.398 | .driver_specific 00:28:46.398 | .nvme_error 00:28:46.398 | .status_code 00:28:46.398 | .command_transient_transport_error' 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 225 > 0 )) 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2325014 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2325014 ']' 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2325014 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.398 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2325014 00:28:46.658 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.658 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.658 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2325014' 00:28:46.658 killing process with pid 2325014 00:28:46.658 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2325014 00:28:46.658 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.658 00:28:46.658 Latency(us) 00:28:46.658 [2024-12-06T12:36:33.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.658 [2024-12-06T12:36:33.317Z] 
=================================================================================================================== 00:28:46.658 [2024-12-06T12:36:33.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.658 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2325014 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2325698 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2325698 /var/tmp/bperf.sock 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2325698 ']' 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:46.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.659 13:36:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:46.659 [2024-12-06 13:36:33.263106] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:28:46.659 [2024-12-06 13:36:33.263160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325698 ] 00:28:46.919 [2024-12-06 13:36:33.347875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.919 [2024-12-06 13:36:33.376801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.489 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.489 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:47.489 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:47.489 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:47.750 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:47.750 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.750 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:28:47.750 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.750 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.750 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.010 nvme0n1 00:28:48.010 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:48.010 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.010 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:48.010 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.010 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:48.010 13:36:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.271 Running I/O for 2 seconds... 
00:28:48.271 [2024-12-06 13:36:34.736188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee9e10 00:28:48.272 [2024-12-06 13:36:34.737387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.737416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.743147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee12d8 00:28:48.272 [2024-12-06 13:36:34.743854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.743872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.751605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee23b8 00:28:48.272 [2024-12-06 13:36:34.752307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.752323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.760023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee3498 00:28:48.272 [2024-12-06 13:36:34.760714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.760731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.768458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4578 00:28:48.272 [2024-12-06 13:36:34.769109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.769126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.776864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5658 00:28:48.272 [2024-12-06 13:36:34.777515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.777532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.785253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6738 00:28:48.272 [2024-12-06 13:36:34.785955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.785971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.793677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed4e8 00:28:48.272 [2024-12-06 13:36:34.794366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.794383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.802069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eee5c8 00:28:48.272 [2024-12-06 13:36:34.802782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.802798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.810460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eef6a8 00:28:48.272 [2024-12-06 13:36:34.811154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.811170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.818847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0788 00:28:48.272 [2024-12-06 13:36:34.819539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.819555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.827246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1868 00:28:48.272 [2024-12-06 13:36:34.827924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.827941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.835631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2948 00:28:48.272 [2024-12-06 13:36:34.836333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.836349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.844012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3a28 00:28:48.272 [2024-12-06 13:36:34.844665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.844682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.852377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4b08 00:28:48.272 [2024-12-06 13:36:34.853067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.853084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.860770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee8088 00:28:48.272 [2024-12-06 13:36:34.861465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 
[2024-12-06 13:36:34.861482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.869147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6fa8 00:28:48.272 [2024-12-06 13:36:34.869812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.869829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.877529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee0630 00:28:48.272 [2024-12-06 13:36:34.878235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.878252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.885918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:48.272 [2024-12-06 13:36:34.886623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.886639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.894290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee27f0 00:28:48.272 [2024-12-06 13:36:34.894980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12720 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.894996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.902671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee38d0 00:28:48.272 [2024-12-06 13:36:34.903362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.272 [2024-12-06 13:36:34.903381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.272 [2024-12-06 13:36:34.911045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee49b0 00:28:48.273 [2024-12-06 13:36:34.911723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.273 [2024-12-06 13:36:34.911739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.273 [2024-12-06 13:36:34.919434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5a90 00:28:48.273 [2024-12-06 13:36:34.920130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.273 [2024-12-06 13:36:34.920146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.534 [2024-12-06 13:36:34.927851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eec840 00:28:48.534 [2024-12-06 13:36:34.928529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:87 nsid:1 lba:22996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.534 [2024-12-06 13:36:34.928545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.534 [2024-12-06 13:36:34.936227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed920 00:28:48.535 [2024-12-06 13:36:34.936921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.936937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.944616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeea00 00:28:48.535 [2024-12-06 13:36:34.945316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.945332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.952983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eefae0 00:28:48.535 [2024-12-06 13:36:34.953652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.953668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.961360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:48.535 [2024-12-06 13:36:34.962053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.962069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.969740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1ca0 00:28:48.535 [2024-12-06 13:36:34.970430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.970446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.978116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2d80 00:28:48.535 [2024-12-06 13:36:34.978822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.978838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.986496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3e60 00:28:48.535 [2024-12-06 13:36:34.987201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.987217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:34.994880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:48.535 
[2024-12-06 13:36:34.995574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:34.995590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.003264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee73e0 00:28:48.535 [2024-12-06 13:36:35.003973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.003991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.011684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee01f8 00:28:48.535 [2024-12-06 13:36:35.012397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.012413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.020068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee12d8 00:28:48.535 [2024-12-06 13:36:35.020724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.020740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.028437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf2eb0) with pdu=0x200016ee23b8 00:28:48.535 [2024-12-06 13:36:35.029135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.029151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.036838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee3498 00:28:48.535 [2024-12-06 13:36:35.037539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.037555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.045221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4578 00:28:48.535 [2024-12-06 13:36:35.045915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.045931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.053603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5658 00:28:48.535 [2024-12-06 13:36:35.054301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.054316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.061993] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6738 00:28:48.535 [2024-12-06 13:36:35.062690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.062706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.070382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed4e8 00:28:48.535 [2024-12-06 13:36:35.071085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.071101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.078771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eee5c8 00:28:48.535 [2024-12-06 13:36:35.079494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.079510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.087197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eef6a8 00:28:48.535 [2024-12-06 13:36:35.087867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.087882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:28:48.535 [2024-12-06 13:36:35.095565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0788 00:28:48.535 [2024-12-06 13:36:35.096271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.535 [2024-12-06 13:36:35.096288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.535 [2024-12-06 13:36:35.103957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1868 00:28:48.535 [2024-12-06 13:36:35.104616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.104632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.112341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2948 00:28:48.536 [2024-12-06 13:36:35.113003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.113019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.120733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3a28 00:28:48.536 [2024-12-06 13:36:35.121394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.121412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.129095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4b08 00:28:48.536 [2024-12-06 13:36:35.129808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.129824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.137497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee8088 00:28:48.536 [2024-12-06 13:36:35.138147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.138163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.145886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6fa8 00:28:48.536 [2024-12-06 13:36:35.146535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.146551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.154272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee0630 00:28:48.536 [2024-12-06 13:36:35.154989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.155006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.162658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:48.536 [2024-12-06 13:36:35.163363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.163378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.171044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee27f0 00:28:48.536 [2024-12-06 13:36:35.171762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.171778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.179415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee38d0 00:28:48.536 [2024-12-06 13:36:35.180111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.180127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.536 [2024-12-06 13:36:35.187976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee49b0 00:28:48.536 [2024-12-06 13:36:35.188654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.536 [2024-12-06 13:36:35.188670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.196359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5a90 00:28:48.797 [2024-12-06 13:36:35.197055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 [2024-12-06 13:36:35.197074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.204791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eec840 00:28:48.797 [2024-12-06 13:36:35.205499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 [2024-12-06 13:36:35.205515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.213173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed920 00:28:48.797 [2024-12-06 13:36:35.213830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 [2024-12-06 13:36:35.213846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.221553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeea00 00:28:48.797 [2024-12-06 13:36:35.222240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 
[2024-12-06 13:36:35.222257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.229938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eefae0 00:28:48.797 [2024-12-06 13:36:35.230655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 [2024-12-06 13:36:35.230672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.238338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:48.797 [2024-12-06 13:36:35.238997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 [2024-12-06 13:36:35.239013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.797 [2024-12-06 13:36:35.246730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1ca0 00:28:48.797 [2024-12-06 13:36:35.247436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.797 [2024-12-06 13:36:35.247452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.255124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2d80 00:28:48.798 [2024-12-06 13:36:35.255836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1596 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.255852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.263497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3e60 00:28:48.798 [2024-12-06 13:36:35.264202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.264219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.271866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:48.798 [2024-12-06 13:36:35.272563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.272580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.280245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee73e0 00:28:48.798 [2024-12-06 13:36:35.280941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.280957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.288645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee01f8 00:28:48.798 [2024-12-06 13:36:35.289330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:22534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.289346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.297032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee12d8 00:28:48.798 [2024-12-06 13:36:35.297723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.297739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.305424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee23b8 00:28:48.798 [2024-12-06 13:36:35.306118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.306134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.313800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee3498 00:28:48.798 [2024-12-06 13:36:35.314510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.314526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.322183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4578 00:28:48.798 [2024-12-06 13:36:35.322879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.322895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.330603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5658 00:28:48.798 [2024-12-06 13:36:35.331309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.331325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.339007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6738 00:28:48.798 [2024-12-06 13:36:35.339712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.339728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.347407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed4e8 00:28:48.798 [2024-12-06 13:36:35.348102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.348119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.355786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eee5c8 00:28:48.798 
[2024-12-06 13:36:35.356447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.356466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.364166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eef6a8 00:28:48.798 [2024-12-06 13:36:35.364858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.364874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.372554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0788 00:28:48.798 [2024-12-06 13:36:35.373262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.373278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.380970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1868 00:28:48.798 [2024-12-06 13:36:35.381657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.381674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.389357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) 
with pdu=0x200016ef2948 00:28:48.798 [2024-12-06 13:36:35.390050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.390066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.397765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3a28 00:28:48.798 [2024-12-06 13:36:35.398452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.398471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.406136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4b08 00:28:48.798 [2024-12-06 13:36:35.406843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.406859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.414528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee8088 00:28:48.798 [2024-12-06 13:36:35.415216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.415235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.422923] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6fa8 00:28:48.798 [2024-12-06 13:36:35.423618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.423634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.431311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee0630 00:28:48.798 [2024-12-06 13:36:35.432020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.432036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.439712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:48.798 [2024-12-06 13:36:35.440403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.440419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:48.798 [2024-12-06 13:36:35.448082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee27f0 00:28:48.798 [2024-12-06 13:36:35.448747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:48.798 [2024-12-06 13:36:35.448762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.456450] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee38d0 00:28:49.059 [2024-12-06 13:36:35.457145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.457162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.464846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee49b0 00:28:49.059 [2024-12-06 13:36:35.465524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.465540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.473232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5a90 00:28:49.059 [2024-12-06 13:36:35.473930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.473946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.481627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eec840 00:28:49.059 [2024-12-06 13:36:35.482307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.482323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:28:49.059 [2024-12-06 13:36:35.489462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016edf550 00:28:49.059 [2024-12-06 13:36:35.490114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.490130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.498858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee88f8 00:28:49.059 [2024-12-06 13:36:35.499664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.499680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.507619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efc560 00:28:49.059 [2024-12-06 13:36:35.508533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.508549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.516223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eee190 00:28:49.059 [2024-12-06 13:36:35.516840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.516856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.524490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eee5c8 00:28:49.059 [2024-12-06 13:36:35.525204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.525220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.533148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef7538 00:28:49.059 [2024-12-06 13:36:35.534094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.534111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.540971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efe2e8 00:28:49.059 [2024-12-06 13:36:35.541894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.541910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.550348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee38d0 00:28:49.059 [2024-12-06 13:36:35.551406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.551422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.558736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb328 00:28:49.059 [2024-12-06 13:36:35.559768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.559784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.567120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eea248 00:28:49.059 [2024-12-06 13:36:35.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.568165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.576592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee9168 00:28:49.059 [2024-12-06 13:36:35.578111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.578127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.582972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ede038 00:28:49.059 [2024-12-06 13:36:35.583761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.059 [2024-12-06 13:36:35.583777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:49.059 [2024-12-06 13:36:35.592179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efd640 00:28:49.059 [2024-12-06 13:36:35.593109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.593125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.600569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee9168 00:28:49.060 [2024-12-06 13:36:35.601498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.601514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.609082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeff18 00:28:49.060 [2024-12-06 13:36:35.610021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.610037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.617478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016edece0 00:28:49.060 [2024-12-06 13:36:35.618423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 
[2024-12-06 13:36:35.618439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.625867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efbcf0 00:28:49.060 [2024-12-06 13:36:35.626805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.626821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.634240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efcdd0 00:28:49.060 [2024-12-06 13:36:35.635179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.635198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.642621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efef90 00:28:49.060 [2024-12-06 13:36:35.643551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.643568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.651052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee2c28 00:28:49.060 [2024-12-06 13:36:35.652022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10779 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.652038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.659443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee0a68 00:28:49.060 [2024-12-06 13:36:35.660401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.660417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.667850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4de8 00:28:49.060 [2024-12-06 13:36:35.668755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.668772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.676218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee99d8 00:28:49.060 [2024-12-06 13:36:35.677160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.677176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.684598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eef6a8 00:28:49.060 [2024-12-06 13:36:35.685537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:14825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.685553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.692980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ede470 00:28:49.060 [2024-12-06 13:36:35.693881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.693897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.701377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb328 00:28:49.060 [2024-12-06 13:36:35.702338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.702355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.060 [2024-12-06 13:36:35.709776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efd640 00:28:49.060 [2024-12-06 13:36:35.710724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.060 [2024-12-06 13:36:35.710741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.718180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee3498 00:28:49.320 [2024-12-06 13:36:35.719118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.719134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:49.320 30049.00 IOPS, 117.38 MiB/s [2024-12-06T12:36:35.979Z] [2024-12-06 13:36:35.726691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef92c0 00:28:49.320 [2024-12-06 13:36:35.727527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.727544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.735073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb480 00:28:49.320 [2024-12-06 13:36:35.735955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.735971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.743555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5220 00:28:49.320 [2024-12-06 13:36:35.744404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.744421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.751957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf2eb0) with pdu=0x200016ef92c0 00:28:49.320 [2024-12-06 13:36:35.752828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.752845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.760359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb480 00:28:49.320 [2024-12-06 13:36:35.761226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.761241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.768640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eea680 00:28:49.320 [2024-12-06 13:36:35.769664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.769679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.777140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5658 00:28:49.320 [2024-12-06 13:36:35.778043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.320 [2024-12-06 13:36:35.778058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.320 [2024-12-06 13:36:35.785813] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efbcf0 00:28:49.320 [2024-12-06 13:36:35.786805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.786821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.795293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efe2e8 00:28:49.321 [2024-12-06 13:36:35.796799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.796815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.801350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee23b8 00:28:49.321 [2024-12-06 13:36:35.802080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.802096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.809858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eecc78 00:28:49.321 [2024-12-06 13:36:35.810599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.810615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:28:49.321 [2024-12-06 13:36:35.818218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eedd58 00:28:49.321 [2024-12-06 13:36:35.818926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.818942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.826604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeee38 00:28:49.321 [2024-12-06 13:36:35.827342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.827358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.834984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee9e10 00:28:49.321 [2024-12-06 13:36:35.835737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.835753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.843371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee8d30 00:28:49.321 [2024-12-06 13:36:35.844112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.844128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.851764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5220 00:28:49.321 [2024-12-06 13:36:35.852507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.852526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.860122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4140 00:28:49.321 [2024-12-06 13:36:35.860823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.860839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.868502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee3060 00:28:49.321 [2024-12-06 13:36:35.869244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.869260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.876874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1f80 00:28:49.321 [2024-12-06 13:36:35.877629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.877646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.885259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1ca0 00:28:49.321 [2024-12-06 13:36:35.886002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.886018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.893638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:49.321 [2024-12-06 13:36:35.894370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.894386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.901992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eebfd0 00:28:49.321 [2024-12-06 13:36:35.902736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.902752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.910356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3a28 00:28:49.321 [2024-12-06 13:36:35.911117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.911133] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.918737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee0a68 00:28:49.321 [2024-12-06 13:36:35.919475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.919491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.927110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0350 00:28:49.321 [2024-12-06 13:36:35.927848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.927864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.935509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef46d0 00:28:49.321 [2024-12-06 13:36:35.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.936280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.943896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee6300 00:28:49.321 [2024-12-06 13:36:35.944640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:49.321 [2024-12-06 13:36:35.944656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.952264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed0b0 00:28:49.321 [2024-12-06 13:36:35.952998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.953014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.960628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eee190 00:28:49.321 [2024-12-06 13:36:35.961374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.961390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.321 [2024-12-06 13:36:35.969000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef7970 00:28:49.321 [2024-12-06 13:36:35.969722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.321 [2024-12-06 13:36:35.969738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:35.977411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee99d8 00:28:49.582 [2024-12-06 13:36:35.978155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18250 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:35.978172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:35.985795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee88f8 00:28:49.582 [2024-12-06 13:36:35.986524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:35.986540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:35.994166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4de8 00:28:49.582 [2024-12-06 13:36:35.994914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:35.994930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.002531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee3498 00:28:49.582 [2024-12-06 13:36:36.003284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.003300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.010344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef35f0 00:28:49.582 [2024-12-06 13:36:36.011067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.011083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.019586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef3e60 00:28:49.582 [2024-12-06 13:36:36.020430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.020446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.028111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eea680 00:28:49.582 [2024-12-06 13:36:36.028979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.028995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.036499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efd208 00:28:49.582 [2024-12-06 13:36:36.037378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.037394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.044861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efda78 00:28:49.582 [2024-12-06 13:36:36.045704] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.045720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.053217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef6cc8 00:28:49.582 [2024-12-06 13:36:36.054031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.054047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.061607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb760 00:28:49.582 [2024-12-06 13:36:36.062423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.062439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.069987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee0630 00:28:49.582 [2024-12-06 13:36:36.070828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.070843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.078357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb048 
00:28:49.582 [2024-12-06 13:36:36.079215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.079231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.086728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee7c50 00:28:49.582 [2024-12-06 13:36:36.087590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.582 [2024-12-06 13:36:36.087606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.582 [2024-12-06 13:36:36.095111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef6890 00:28:49.582 [2024-12-06 13:36:36.095976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.095992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.102922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee9e10 00:28:49.583 [2024-12-06 13:36:36.103780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.103796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.112120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.583 [2024-12-06 13:36:36.112970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.112986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.120780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.583 [2024-12-06 13:36:36.121523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.121539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.129142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed4e8 00:28:49.583 [2024-12-06 13:36:36.129754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.129769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.137537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5ec8 00:28:49.583 [2024-12-06 13:36:36.138287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.138303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.145911] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.583 [2024-12-06 13:36:36.146672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.146690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.154340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed4e8 00:28:49.583 [2024-12-06 13:36:36.155121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.155137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.162723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5ec8 00:28:49.583 [2024-12-06 13:36:36.163478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.163494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.171121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.583 [2024-12-06 13:36:36.171876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.171893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:28:49.583 [2024-12-06 13:36:36.179489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eed4e8 00:28:49.583 [2024-12-06 13:36:36.180241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.180257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.188034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee5ec8 00:28:49.583 [2024-12-06 13:36:36.188764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.188780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.196781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.583 [2024-12-06 13:36:36.197742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.197758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.205062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eedd58 00:28:49.583 [2024-12-06 13:36:36.206049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.206065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.213444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2510 00:28:49.583 [2024-12-06 13:36:36.214414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.214429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.221821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:49.583 [2024-12-06 13:36:36.222753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.222770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.583 [2024-12-06 13:36:36.230181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb480 00:28:49.583 [2024-12-06 13:36:36.231148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.583 [2024-12-06 13:36:36.231165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.844 [2024-12-06 13:36:36.238563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efa7d8 00:28:49.844 [2024-12-06 13:36:36.239542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.844 [2024-12-06 13:36:36.239558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.844 [2024-12-06 13:36:36.246936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efda78 00:28:49.844 [2024-12-06 13:36:36.247903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.844 [2024-12-06 13:36:36.247919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.844 [2024-12-06 13:36:36.255315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb760 00:28:49.844 [2024-12-06 13:36:36.256294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.844 [2024-12-06 13:36:36.256310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.844 [2024-12-06 13:36:36.263699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb048 00:28:49.844 [2024-12-06 13:36:36.264658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.844 [2024-12-06 13:36:36.264674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.844 [2024-12-06 13:36:36.272058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eecc78 00:28:49.844 [2024-12-06 13:36:36.273032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.844 [2024-12-06 13:36:36.273047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.844 [2024-12-06 13:36:36.280413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:49.844 [2024-12-06 13:36:36.281383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.281399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.288789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1f80 00:28:49.845 [2024-12-06 13:36:36.289766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.289782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.297196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4140 00:28:49.845 [2024-12-06 13:36:36.298173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.298189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.305589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeaef0 00:28:49.845 [2024-12-06 13:36:36.306539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 
[2024-12-06 13:36:36.306556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.313935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeff18 00:28:49.845 [2024-12-06 13:36:36.314910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.314925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.322296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016edece0 00:28:49.845 [2024-12-06 13:36:36.323263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.323279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.330654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef92c0 00:28:49.845 [2024-12-06 13:36:36.331625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.331641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.339033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.845 [2024-12-06 13:36:36.340004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11134 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.340020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.347401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eedd58 00:28:49.845 [2024-12-06 13:36:36.348379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.348395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.355771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2510 00:28:49.845 [2024-12-06 13:36:36.356706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.356722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.364122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:49.845 [2024-12-06 13:36:36.365092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.365114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.372487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb480 00:28:49.845 [2024-12-06 13:36:36.373467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:20341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.373484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.380860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efa7d8 00:28:49.845 [2024-12-06 13:36:36.381833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.381849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.389250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efda78 00:28:49.845 [2024-12-06 13:36:36.390219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.390234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.397649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb760 00:28:49.845 [2024-12-06 13:36:36.398575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.398591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.406028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb048 00:28:49.845 [2024-12-06 13:36:36.406999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.407015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.414382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eecc78 00:28:49.845 [2024-12-06 13:36:36.415342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.415358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.422745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:49.845 [2024-12-06 13:36:36.423699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.423715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.431111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1f80 00:28:49.845 [2024-12-06 13:36:36.432077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.432093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.439508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4140 00:28:49.845 
[2024-12-06 13:36:36.440483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.440499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.447899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeaef0 00:28:49.845 [2024-12-06 13:36:36.448886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.448902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.456255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeff18 00:28:49.845 [2024-12-06 13:36:36.457244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.457260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.464615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016edece0 00:28:49.845 [2024-12-06 13:36:36.465592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.845 [2024-12-06 13:36:36.465608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.845 [2024-12-06 13:36:36.473009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf2eb0) with pdu=0x200016ef92c0 00:28:49.846 [2024-12-06 13:36:36.473974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.846 [2024-12-06 13:36:36.473990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.846 [2024-12-06 13:36:36.481379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:49.846 [2024-12-06 13:36:36.482350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.846 [2024-12-06 13:36:36.482366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.846 [2024-12-06 13:36:36.489805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eedd58 00:28:49.846 [2024-12-06 13:36:36.490776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.846 [2024-12-06 13:36:36.490792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:49.846 [2024-12-06 13:36:36.498179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2510 00:28:49.846 [2024-12-06 13:36:36.499146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.846 [2024-12-06 13:36:36.499163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.506546] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:50.108 [2024-12-06 13:36:36.507507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.507524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.514938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb480 00:28:50.108 [2024-12-06 13:36:36.515910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.515926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.523336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efa7d8 00:28:50.108 [2024-12-06 13:36:36.524277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.524293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.531725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efda78 00:28:50.108 [2024-12-06 13:36:36.532664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.532680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:28:50.108 [2024-12-06 13:36:36.540114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb760 00:28:50.108 [2024-12-06 13:36:36.541084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.541100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.548485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb048 00:28:50.108 [2024-12-06 13:36:36.549451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.549470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.556857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eecc78 00:28:50.108 [2024-12-06 13:36:36.557845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.557861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.565248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:50.108 [2024-12-06 13:36:36.566236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.566252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.573660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1f80 00:28:50.108 [2024-12-06 13:36:36.574640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.574655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.582040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee4140 00:28:50.108 [2024-12-06 13:36:36.583014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.583033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.590414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeaef0 00:28:50.108 [2024-12-06 13:36:36.591386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.591403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.598789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeff18 00:28:50.108 [2024-12-06 13:36:36.599725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.599741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.607161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016edece0 00:28:50.108 [2024-12-06 13:36:36.608093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.608109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.615557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef92c0 00:28:50.108 [2024-12-06 13:36:36.616541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.616557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.623938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef4f40 00:28:50.108 [2024-12-06 13:36:36.624925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.624941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.632319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eedd58 00:28:50.108 [2024-12-06 13:36:36.633290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.633306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.640708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef2510 00:28:50.108 [2024-12-06 13:36:36.641657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.641673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.649068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1710 00:28:50.108 [2024-12-06 13:36:36.650055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.650071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.657443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb480 00:28:50.108 [2024-12-06 13:36:36.658418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 [2024-12-06 13:36:36.658434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.108 [2024-12-06 13:36:36.665842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efa7d8 00:28:50.108 [2024-12-06 13:36:36.666787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.108 
[2024-12-06 13:36:36.666803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 [2024-12-06 13:36:36.674224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efda78 00:28:50.109 [2024-12-06 13:36:36.675280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.675296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 [2024-12-06 13:36:36.682665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eeb760 00:28:50.109 [2024-12-06 13:36:36.683637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.683653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 [2024-12-06 13:36:36.691035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016efb048 00:28:50.109 [2024-12-06 13:36:36.692005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.692021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 [2024-12-06 13:36:36.699405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016eecc78 00:28:50.109 [2024-12-06 13:36:36.700353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9232 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.700369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 [2024-12-06 13:36:36.707787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef0bc0 00:28:50.109 [2024-12-06 13:36:36.708756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.708772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 [2024-12-06 13:36:36.716186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ee1f80 00:28:50.109 [2024-12-06 13:36:36.717162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.717178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:50.109 30287.50 IOPS, 118.31 MiB/s [2024-12-06T12:36:36.768Z] [2024-12-06 13:36:36.724820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf2eb0) with pdu=0x200016ef1ca0 00:28:50.109 [2024-12-06 13:36:36.725725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.109 [2024-12-06 13:36:36.725741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:50.109 00:28:50.109 Latency(us) 00:28:50.109 [2024-12-06T12:36:36.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.109 Job: nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:28:50.109 nvme0n1 : 2.01 30287.25 118.31 0.00 0.00 4220.34 2088.96 16493.23 00:28:50.109 [2024-12-06T12:36:36.768Z] =================================================================================================================== 00:28:50.109 [2024-12-06T12:36:36.768Z] Total : 30287.25 118.31 0.00 0.00 4220.34 2088.96 16493.23 00:28:50.109 { 00:28:50.109 "results": [ 00:28:50.109 { 00:28:50.109 "job": "nvme0n1", 00:28:50.109 "core_mask": "0x2", 00:28:50.109 "workload": "randwrite", 00:28:50.109 "status": "finished", 00:28:50.109 "queue_depth": 128, 00:28:50.109 "io_size": 4096, 00:28:50.109 "runtime": 2.00629, 00:28:50.109 "iops": 30287.246609413396, 00:28:50.109 "mibps": 118.30955706802108, 00:28:50.109 "io_failed": 0, 00:28:50.109 "io_timeout": 0, 00:28:50.109 "avg_latency_us": 4220.340377519954, 00:28:50.109 "min_latency_us": 2088.96, 00:28:50.109 "max_latency_us": 16493.226666666666 00:28:50.109 } 00:28:50.109 ], 00:28:50.109 "core_count": 1 00:28:50.109 } 00:28:50.109 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:50.109 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:50.109 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:50.109 | .driver_specific 00:28:50.109 | .nvme_error 00:28:50.109 | .status_code 00:28:50.109 | .command_transient_transport_error' 00:28:50.109 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 )) 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2325698 00:28:50.369 13:36:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2325698 ']' 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2325698 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2325698 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2325698' 00:28:50.369 killing process with pid 2325698 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2325698 00:28:50.369 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.369 00:28:50.369 Latency(us) 00:28:50.369 [2024-12-06T12:36:37.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.369 [2024-12-06T12:36:37.028Z] =================================================================================================================== 00:28:50.369 [2024-12-06T12:36:37.028Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.369 13:36:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2325698 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2326528 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2326528 /var/tmp/bperf.sock 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2326528 ']' 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.629 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.629 [2024-12-06 13:36:37.149096] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
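The trace above launches a second bdevperf instance (`-m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z`) and then blocks in `waitforlisten` until the tool's RPC server accepts connections on the UNIX socket. A minimal sketch of that readiness poll, assuming an illustrative helper name and timeout (this is not SPDK's actual implementation):

```python
import os
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll a UNIX-domain socket until connect() succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True  # server is listening
                except OSError:
                    pass  # socket file exists but nothing is accepting yet
        time.sleep(interval)
    return False
```

SPDK's own `waitforlisten` additionally verifies the target process is still alive between polls, so a crashed bdevperf fails the wait quickly instead of consuming the whole timeout.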
00:28:50.629 [2024-12-06 13:36:37.149153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326528 ] 00:28:50.629 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.629 Zero copy mechanism will not be used. 00:28:50.629 [2024-12-06 13:36:37.231722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.629 [2024-12-06 13:36:37.261129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.570 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.570 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:51.570 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:51.570 13:36:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:51.570 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:51.570 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.570 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.570 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.570 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.570 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.831 nvme0n1 00:28:51.831 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:51.831 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.831 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:51.831 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.831 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:51.831 13:36:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.091 Zero copy mechanism will not be used. 00:28:52.091 Running I/O for 2 seconds... 
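Once `accel_error_inject_error -o crc32c -t corrupt -i 32` arms the CRC32C corruption, the `get_transient_errcount` helper seen earlier in the trace pulls the per-bdev error counter out of `bdev_get_iostat` with the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`. The same extraction in Python, run against a hand-written illustrative payload (only the fields the digest test reads are included; this is not the full iostat schema):

```python
import json

# Illustrative shape of a `bdev_get_iostat -b nvme0n1` reply (assumed, trimmed).
iostat_reply = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 238
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(reply: dict) -> int:
    # Python equivalent of the jq filter:
    #   .bdevs[0] | .driver_specific | .nvme_error
    #             | .status_code | .command_transient_transport_error
    return reply["bdevs"][0]["driver_specific"]["nvme_error"] \
                ["status_code"]["command_transient_transport_error"]

count = get_transient_errcount(iostat_reply)
```

The value 238 here mirrors the `(( 238 > 0 ))` assertion in the trace: the test passes as long as at least one injected digest error was surfaced as a transient transport error.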
00:28:52.091 [2024-12-06 13:36:38.523225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.091 [2024-12-06 13:36:38.523370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.091 [2024-12-06 13:36:38.523394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.532105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.532406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.532425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.540859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.541154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.541172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.550957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.551213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.551229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.560151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.560211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.560228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.567966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.568045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.568061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.576521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.576867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.576884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.583383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.583433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.583461] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.587181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.587483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.587500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.596349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.596397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.596413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.600914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.600988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.607434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.607735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.607751] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.616143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.616203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.616218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.621557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.621617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.621632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.627986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.628244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.628260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.636222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.636503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:52.092 [2024-12-06 13:36:38.636519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.644823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.645097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.645113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.652900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.653193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.653209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.662517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.662795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.662811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.671186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.671259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.671275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.679683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.679980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.679997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.687413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.687482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.687497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.696665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.696726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.092 [2024-12-06 13:36:38.696741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.092 [2024-12-06 13:36:38.704870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.092 [2024-12-06 13:36:38.704932] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.093 [2024-12-06 13:36:38.704947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.093 [2024-12-06 13:36:38.714559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.093 [2024-12-06 13:36:38.714618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.093 [2024-12-06 13:36:38.714633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.093 [2024-12-06 13:36:38.723113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.093 [2024-12-06 13:36:38.723184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.093 [2024-12-06 13:36:38.723200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.093 [2024-12-06 13:36:38.732528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.093 [2024-12-06 13:36:38.732757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.093 [2024-12-06 13:36:38.732772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.093 [2024-12-06 13:36:38.742729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.093 [2024-12-06 13:36:38.742807] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.093 [2024-12-06 13:36:38.742823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.355 [2024-12-06 13:36:38.753194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.355 [2024-12-06 13:36:38.753424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.355 [2024-12-06 13:36:38.753440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.355 [2024-12-06 13:36:38.764859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.355 [2024-12-06 13:36:38.765145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.355 [2024-12-06 13:36:38.765162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.355 [2024-12-06 13:36:38.776384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.355 [2024-12-06 13:36:38.776680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.355 [2024-12-06 13:36:38.776695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.355 [2024-12-06 13:36:38.788090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 
00:28:52.355 [2024-12-06 13:36:38.788336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.788351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.799273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.799520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.799536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.809550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.809895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.809915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.819593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.819991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.820007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.829152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.829438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.829459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.835590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.835864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.835881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.843271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.843615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.843632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.849638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.849865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.849880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.857539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.857766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.857781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.864785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.865056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.865073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.874528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.874737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.355 [2024-12-06 13:36:38.874753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.355 [2024-12-06 13:36:38.884685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.355 [2024-12-06 13:36:38.884982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.885001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.894681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.894937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.894953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.905365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.905565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.905581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.915892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.916047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.916062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.926868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.927061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.927077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.936837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.937084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.937100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.945453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.945763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.945779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.955177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.955509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.955525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.965583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.965828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.965843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.975545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.975841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.975857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.985914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.986186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.986202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:38.997165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:38.997308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:38.997324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.356 [2024-12-06 13:36:39.007311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.356 [2024-12-06 13:36:39.007595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.356 [2024-12-06 13:36:39.007611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.017256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.017523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.017539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.027836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.028089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.028105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.038147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.038329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.038344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.047830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.048190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.048207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.058057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.058347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.058363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.065566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.065687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.065702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.069241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.069384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.069400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.072041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.072196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.072211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.074800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.074962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.074977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.077548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.077696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.077712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.080957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.081165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.081180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.085915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.618 [2024-12-06 13:36:39.086224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.618 [2024-12-06 13:36:39.086240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.618 [2024-12-06 13:36:39.095703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.095825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.095841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.104826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.105132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.105151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.112935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.113045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.113061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.115978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.116110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.116125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.118944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.119075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.119090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.121624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.121740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.121755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.124281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.124398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.124413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.126927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.127064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.127080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.129577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.129696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.129711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.132300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.132421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.132436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.135330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.135459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.135475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.138263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.138388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.138403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.141190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.141327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.141342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.143673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.143802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.143817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.146152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.146288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.146304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.149338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.149478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.149493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.156425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.156706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.156722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.164777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.165082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.165099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.173412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.173552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.173568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.176099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.176228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.176243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.178919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.179047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.179063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.181528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.181673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.184124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.619 [2024-12-06 13:36:39.184253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.619 [2024-12-06 13:36:39.184269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.619 [2024-12-06 13:36:39.186918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.187051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.187066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.189993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.190120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.190137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.195607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.195757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.195773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.201060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.201190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.201206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.205356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.205490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.205509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.209001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.209128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.209144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.214593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.214723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.214738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.218537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.218668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.218683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.221910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.222038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.222054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.225461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.225590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.225606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.229231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.229359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.229375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.234670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.234796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.234812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.237891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.238018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.238034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.241042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.241176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.244167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.244296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.244311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.247270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.247400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.247416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.250274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.250404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.250420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.253366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.253498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.253514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.256309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.256435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.256451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.263281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.263409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.263424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.268084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.268213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.268229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:52.620 [2024-12-06 13:36:39.271861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.620 [2024-12-06 13:36:39.272007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.620 [2024-12-06 13:36:39.272023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:52.881 [2024-12-06 13:36:39.275574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.882 [2024-12-06 13:36:39.275703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:52.882 [2024-12-06 13:36:39.275718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:52.882 [2024-12-06 13:36:39.281436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8
00:28:52.882 [2024-12-06 13:36:39.281575]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.281590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.284473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.284604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.284619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.287133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.287262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.287278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.289808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.289938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.289955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.292515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with 
pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.292644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.292660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.295164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.295294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.295310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.297718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.297845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.297861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.300308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.300439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.300463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.302846] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.302993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.305784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.305917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.305933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.308341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.308475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.308491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.310823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.310957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.310972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 
13:36:39.313427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.313558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.313574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.318367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.318496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.318512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.321557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.321732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.321747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.325210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.325538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.325555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.335379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.335715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.335732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.345016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.345240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.345255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.355561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.355859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.355875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.365987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.366284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.366301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.882 [2024-12-06 13:36:39.376584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.882 [2024-12-06 13:36:39.376822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.882 [2024-12-06 13:36:39.376838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.387262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.387489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.387505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.397581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.397728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.397744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.407579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.407854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.407871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.418224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.418512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.418527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.427933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.428168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.428184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.437663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.437926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.437943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.445012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.445155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 
[2024-12-06 13:36:39.445171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.447799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.447934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.447950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.450480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.450610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.450626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.453315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.453445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.453466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.456225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.456352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.456369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.459313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.459441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.459462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.462448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.462583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.462603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.465591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.465719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.465735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.468723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.468855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.468871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.471542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.471674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.471690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.474580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.474708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.474723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.479427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.479703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.479719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.485137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.485272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.485288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.488008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.488136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.488152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.491836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.492065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.492081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.499021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 [2024-12-06 13:36:39.499386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.883 [2024-12-06 13:36:39.499403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.883 [2024-12-06 13:36:39.503039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.883 
[2024-12-06 13:36:39.503169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.503185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.884 [2024-12-06 13:36:39.507575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.507702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.507718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.884 [2024-12-06 13:36:39.511130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.511264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.511280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.884 [2024-12-06 13:36:39.514661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.515734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.515751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:52.884 4927.00 IOPS, 615.88 MiB/s [2024-12-06T12:36:39.543Z] [2024-12-06 13:36:39.522247] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.522315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.522330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:52.884 [2024-12-06 13:36:39.528688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.528739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.528754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:52.884 [2024-12-06 13:36:39.532747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.532789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.532805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:52.884 [2024-12-06 13:36:39.536964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:52.884 [2024-12-06 13:36:39.537010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.884 [2024-12-06 13:36:39.537025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:28:53.145 [2024-12-06 13:36:39.541210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.145 [2024-12-06 13:36:39.541255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.541270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.545337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.545382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.545397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.551673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.551739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.551754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.558497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.558566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.558581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.563442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.563495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.563510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.567341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.567385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.567400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.573624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.573685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.573700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.578536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.578581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.578597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.582269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.582325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.582343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.585985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.586031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.586046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.591487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.591536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.591551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.595098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.595141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.595156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.598816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.598875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.598890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.602176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.602223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.602238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.605495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.605539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.608879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.608927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 
[2024-12-06 13:36:39.608942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.612170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.612223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.612238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.615593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.615641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.615657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.619062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.619105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.619120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.623188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.623261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.623276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.628605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.628668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.628683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.634928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.634973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.634988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.638393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.638444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.146 [2024-12-06 13:36:39.638465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.146 [2024-12-06 13:36:39.641625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.146 [2024-12-06 13:36:39.641668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.641683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.645185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.645228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.645244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.649752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.649794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.649810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.655060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.655112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.655127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.661118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.661160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.661175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.665027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.665072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.668707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.668751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.668766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.672141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.672192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.672207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.676049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 
[2024-12-06 13:36:39.676093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.676108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.683255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.683528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.683545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.689205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.689263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.689279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.692880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.692927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.692944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.697342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.697388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.697403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.701719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.701762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.701778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.708474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.708549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.708564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.713511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.713814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.713829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.721827] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.721894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.721909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.731689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.731965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.731981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.743232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.743458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.743474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.754187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.754436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.754451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:28:53.147 [2024-12-06 13:36:39.764544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.764805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.764821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.772332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.772375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.772390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.147 [2024-12-06 13:36:39.776592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.147 [2024-12-06 13:36:39.776659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.147 [2024-12-06 13:36:39.776675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.148 [2024-12-06 13:36:39.780919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.148 [2024-12-06 13:36:39.780964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.148 [2024-12-06 13:36:39.780979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.148 [2024-12-06 13:36:39.785081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.148 [2024-12-06 13:36:39.785153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.148 [2024-12-06 13:36:39.785168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.148 [2024-12-06 13:36:39.790037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.148 [2024-12-06 13:36:39.790109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.148 [2024-12-06 13:36:39.790125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.148 [2024-12-06 13:36:39.795474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.148 [2024-12-06 13:36:39.795566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.148 [2024-12-06 13:36:39.795582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.802914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.802958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.802974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.806763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.806806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.806822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.810197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.810250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.810265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.813175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.813236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.813250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.816050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.816104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.816119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.818888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.818939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.818954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.823691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.823736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.823751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.826584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.826644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.826659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.829781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.829842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 
[2024-12-06 13:36:39.829857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.832919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.832977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.832992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.837715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.837778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.837796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.842465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.842512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.842528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.845597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.845664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.845679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.849268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.849312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.849327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.853762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.853812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.853828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.860046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.860091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.860106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.863856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.863900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.863915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.867482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.867565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.867581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.871369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.871452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.871471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.876572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.876618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.410 [2024-12-06 13:36:39.876636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.410 [2024-12-06 13:36:39.884814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.410 [2024-12-06 13:36:39.885065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.885086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.889330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.889381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.889397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.892785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.892832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.892847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.896099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.896144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.896159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.899411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 
[2024-12-06 13:36:39.899486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.899501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.904007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.904052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.904068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.908882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.908939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.908954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.914176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.914223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.914238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.917520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.917570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.917585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.920745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.920788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.920803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.923671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.923727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.923742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.928299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.928363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.928378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.933130] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.933177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.933193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.936227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.936287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.936302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.940253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.940331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.940346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.946836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.947058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.947073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:53.411 [2024-12-06 13:36:39.950463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.950519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.950535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.953688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.953755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.953770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.959393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.959441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.959462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.965816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.965883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.965898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.973762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.973990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.974005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.983972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.984183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.984198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:39.994413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:39.994704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.411 [2024-12-06 13:36:39.994720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.411 [2024-12-06 13:36:40.005167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.411 [2024-12-06 13:36:40.005361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.005379] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.412 [2024-12-06 13:36:40.013375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.412 [2024-12-06 13:36:40.013443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.013466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.412 [2024-12-06 13:36:40.023219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.412 [2024-12-06 13:36:40.023472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.023491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.412 [2024-12-06 13:36:40.033772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.412 [2024-12-06 13:36:40.034043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.034060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.412 [2024-12-06 13:36:40.043345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.412 [2024-12-06 13:36:40.043593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.043610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.412 [2024-12-06 13:36:40.053857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.412 [2024-12-06 13:36:40.054128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.054145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.412 [2024-12-06 13:36:40.064431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.412 [2024-12-06 13:36:40.064649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.412 [2024-12-06 13:36:40.064664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.074995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.075134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.075149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.084464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.084652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:53.674 [2024-12-06 13:36:40.084668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.093412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.093517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.093533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.103450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.103711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.103727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.113587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.113872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.113887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.119288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.119368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.119384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.122978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.123110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.123125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.128461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.128529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.128545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.136660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.136740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.136756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.140962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.141120] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.144922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.144982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.144997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.148263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.148308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.148324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.152353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.152398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.152414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.155648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.155743] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.155759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.160938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.161121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.161137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.169528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.169592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.169607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.172684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.172729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.172745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.175924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 
00:28:53.674 [2024-12-06 13:36:40.175998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.176013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.179039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.179101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.179117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.182515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.182627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.182642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.190391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.190611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.674 [2024-12-06 13:36:40.190626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.674 [2024-12-06 13:36:40.195451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.674 [2024-12-06 13:36:40.195600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.195619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.198981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.199098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.199113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.202637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.202743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.202758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.206268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.206352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.206367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.209702] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.209789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.209804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.213198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.213280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.213295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.216756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.216859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.216874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.220309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.220404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:28:53.675 [2024-12-06 13:36:40.223068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.223138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.223153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.226931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.227129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.227144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.233058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.233117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.233132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.237002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.237257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.237272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.244281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.244355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.244370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.248110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.248164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.248179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.250934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.250993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.251008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.253869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.253956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.253971] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.256642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.256696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.256711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.259447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.259517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.262087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.262140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.262155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.264614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.264662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.264677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.267095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.267140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.267155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.269556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.269609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.269624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.272065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.272106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.675 [2024-12-06 13:36:40.272122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.675 [2024-12-06 13:36:40.274535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.675 [2024-12-06 13:36:40.274584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:53.676 [2024-12-06 13:36:40.274600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.676 [2024-12-06 13:36:40.277853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.676 [2024-12-06 13:36:40.277954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.676 [2024-12-06 13:36:40.277971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.676 [2024-12-06 13:36:40.284775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.676 [2024-12-06 13:36:40.284819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.676 [2024-12-06 13:36:40.284834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.676 [2024-12-06 13:36:40.294001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.676 [2024-12-06 13:36:40.294294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.676 [2024-12-06 13:36:40.294312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.676 [2024-12-06 13:36:40.303587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.676 [2024-12-06 13:36:40.303889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.676 [2024-12-06 13:36:40.303905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.676 [2024-12-06 13:36:40.313740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.676 [2024-12-06 13:36:40.313964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.676 [2024-12-06 13:36:40.313979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.676 [2024-12-06 13:36:40.323274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.676 [2024-12-06 13:36:40.323528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.676 [2024-12-06 13:36:40.323543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.332641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.332910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.332926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.342240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.342309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.342324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.351938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.352012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.352027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.361068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.361391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.361407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.365395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.365451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.365472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.371314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.371612] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.371627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.378091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.378366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.378383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.384235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.384288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.384303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.390754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.937 [2024-12-06 13:36:40.390809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.937 [2024-12-06 13:36:40.390824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.937 [2024-12-06 13:36:40.395402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 
00:28:53.937 [2024-12-06 13:36:40.395488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.395503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.398856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.398930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.398945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.404417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.404491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.404506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.411758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.411855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.411870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.419301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.419606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.419622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.428461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.428729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.428744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.438795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.439073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.439089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.449832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.450101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.450117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.460130] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.460243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.460258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.470093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.470305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.470321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.479849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.480120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.480136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.489585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.489804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.489819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:28:53.938 [2024-12-06 13:36:40.499669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.499960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.499976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:53.938 [2024-12-06 13:36:40.509633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.509917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.509935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:53.938 5234.50 IOPS, 654.31 MiB/s [2024-12-06T12:36:40.597Z] [2024-12-06 13:36:40.520621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xaf31f0) with pdu=0x200016eff3c8 00:28:53.938 [2024-12-06 13:36:40.520687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.938 [2024-12-06 13:36:40.520703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:53.938 00:28:53.938 Latency(us) 00:28:53.938 [2024-12-06T12:36:40.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.938 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:53.938 nvme0n1 : 2.01 5225.72 653.21 0.00 0.00 3055.27 1174.19 16056.32 00:28:53.938 [2024-12-06T12:36:40.597Z] 
=================================================================================================================== 00:28:53.938 [2024-12-06T12:36:40.597Z] Total : 5225.72 653.21 0.00 0.00 3055.27 1174.19 16056.32 00:28:53.938 { 00:28:53.938 "results": [ 00:28:53.938 { 00:28:53.938 "job": "nvme0n1", 00:28:53.938 "core_mask": "0x2", 00:28:53.938 "workload": "randwrite", 00:28:53.938 "status": "finished", 00:28:53.938 "queue_depth": 16, 00:28:53.938 "io_size": 131072, 00:28:53.938 "runtime": 2.006805, 00:28:53.938 "iops": 5225.71948943719, 00:28:53.938 "mibps": 653.2149361796487, 00:28:53.938 "io_failed": 0, 00:28:53.938 "io_timeout": 0, 00:28:53.938 "avg_latency_us": 3055.2715094879372, 00:28:53.938 "min_latency_us": 1174.1866666666667, 00:28:53.938 "max_latency_us": 16056.32 00:28:53.938 } 00:28:53.938 ], 00:28:53.938 "core_count": 1 00:28:53.938 } 00:28:53.938 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:53.938 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:53.938 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:53.938 | .driver_specific 00:28:53.938 | .nvme_error 00:28:53.938 | .status_code 00:28:53.938 | .command_transient_transport_error' 00:28:53.938 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 339 > 0 )) 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2326528 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2326528 ']' 00:28:54.199 13:36:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2326528 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2326528 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2326528' 00:28:54.199 killing process with pid 2326528 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2326528 00:28:54.199 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.199 00:28:54.199 Latency(us) 00:28:54.199 [2024-12-06T12:36:40.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.199 [2024-12-06T12:36:40.858Z] =================================================================================================================== 00:28:54.199 [2024-12-06T12:36:40.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.199 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2326528 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2324087 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2324087 ']' 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # kill -0 2324087 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324087 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324087' 00:28:54.459 killing process with pid 2324087 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2324087 00:28:54.459 13:36:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2324087 00:28:54.459 00:28:54.459 real 0m16.400s 00:28:54.459 user 0m32.522s 00:28:54.459 sys 0m3.616s 00:28:54.459 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.460 ************************************ 00:28:54.460 END TEST nvmf_digest_error 00:28:54.460 ************************************ 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.460 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.720 rmmod nvme_tcp 00:28:54.720 rmmod nvme_fabrics 00:28:54.720 rmmod nvme_keyring 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2324087 ']' 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2324087 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2324087 ']' 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2324087 00:28:54.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2324087) - No such process 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2324087 is not found' 00:28:54.720 Process with pid 2324087 is not found 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:54.720 13:36:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.720 13:36:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.631 13:36:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.631 00:28:56.631 real 0m43.288s 00:28:56.631 user 1m7.969s 00:28:56.631 sys 0m13.249s 00:28:56.631 13:36:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.631 13:36:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:56.631 ************************************ 00:28:56.631 END TEST nvmf_digest 00:28:56.631 ************************************ 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:56.892 ************************************ 00:28:56.892 START TEST nvmf_bdevperf 00:28:56.892 ************************************ 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:56.892 * Looking for test storage... 00:28:56.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:28:56.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.892 --rc genhtml_branch_coverage=1 00:28:56.892 --rc genhtml_function_coverage=1 00:28:56.892 --rc genhtml_legend=1 00:28:56.892 --rc geninfo_all_blocks=1 00:28:56.892 --rc geninfo_unexecuted_blocks=1 00:28:56.892 00:28:56.892 ' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:56.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.892 --rc genhtml_branch_coverage=1 00:28:56.892 --rc genhtml_function_coverage=1 00:28:56.892 --rc genhtml_legend=1 00:28:56.892 --rc geninfo_all_blocks=1 00:28:56.892 --rc geninfo_unexecuted_blocks=1 00:28:56.892 00:28:56.892 ' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:56.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.892 --rc genhtml_branch_coverage=1 00:28:56.892 --rc genhtml_function_coverage=1 00:28:56.892 --rc genhtml_legend=1 00:28:56.892 --rc geninfo_all_blocks=1 00:28:56.892 --rc geninfo_unexecuted_blocks=1 00:28:56.892 00:28:56.892 ' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:56.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.892 --rc genhtml_branch_coverage=1 00:28:56.892 --rc genhtml_function_coverage=1 00:28:56.892 --rc genhtml_legend=1 00:28:56.892 --rc geninfo_all_blocks=1 00:28:56.892 --rc geninfo_unexecuted_blocks=1 00:28:56.892 00:28:56.892 ' 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.892 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.893 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:57.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:28:57.155 13:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:05.289 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:05.289 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:05.289 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:05.289 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.289 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:05.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:29:05.290 00:29:05.290 --- 10.0.0.2 ping statistics --- 00:29:05.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.290 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:29:05.290 00:29:05.290 --- 10.0.0.1 ping statistics --- 00:29:05.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.290 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:05.290 13:36:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2331406 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2331406 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2331406 ']' 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.290 [2024-12-06 13:36:51.075211] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:29:05.290 [2024-12-06 13:36:51.075278] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.290 [2024-12-06 13:36:51.177684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.290 [2024-12-06 13:36:51.230033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.290 [2024-12-06 13:36:51.230089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:05.290 [2024-12-06 13:36:51.230097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.290 [2024-12-06 13:36:51.230104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.290 [2024-12-06 13:36:51.230111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.290 [2024-12-06 13:36:51.232049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.290 [2024-12-06 13:36:51.232209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.290 [2024-12-06 13:36:51.232212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.290 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.551 [2024-12-06 13:36:51.954438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.551 13:36:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.551 Malloc0 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.551 13:36:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:05.551 [2024-12-06 13:36:52.026837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.551 { 00:29:05.551 "params": { 00:29:05.551 "name": "Nvme$subsystem", 00:29:05.551 "trtype": "$TEST_TRANSPORT", 00:29:05.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.551 "adrfam": "ipv4", 00:29:05.551 "trsvcid": "$NVMF_PORT", 00:29:05.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.551 "hdgst": ${hdgst:-false}, 00:29:05.551 "ddgst": ${ddgst:-false} 00:29:05.551 }, 00:29:05.551 "method": "bdev_nvme_attach_controller" 00:29:05.551 } 00:29:05.551 EOF 00:29:05.551 )") 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:05.551 13:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.551 "params": { 00:29:05.551 "name": "Nvme1", 00:29:05.551 "trtype": "tcp", 00:29:05.551 "traddr": "10.0.0.2", 00:29:05.551 "adrfam": "ipv4", 00:29:05.551 "trsvcid": "4420", 00:29:05.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.551 "hdgst": false, 00:29:05.551 "ddgst": false 00:29:05.551 }, 00:29:05.551 "method": "bdev_nvme_attach_controller" 00:29:05.551 }' 00:29:05.551 [2024-12-06 13:36:52.085017] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:29:05.551 [2024-12-06 13:36:52.085084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331752 ] 00:29:05.551 [2024-12-06 13:36:52.175186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.811 [2024-12-06 13:36:52.228512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.072 Running I/O for 1 seconds... 
00:29:07.017 8535.00 IOPS, 33.34 MiB/s 00:29:07.017 Latency(us) 00:29:07.017 [2024-12-06T12:36:53.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.017 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:07.017 Verification LBA range: start 0x0 length 0x4000 00:29:07.017 Nvme1n1 : 1.01 8603.23 33.61 0.00 0.00 14807.00 1884.16 13926.40 00:29:07.017 [2024-12-06T12:36:53.676Z] =================================================================================================================== 00:29:07.017 [2024-12-06T12:36:53.676Z] Total : 8603.23 33.61 0.00 0.00 14807.00 1884.16 13926.40 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2332062 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:07.278 { 00:29:07.278 "params": { 00:29:07.278 "name": "Nvme$subsystem", 00:29:07.278 "trtype": "$TEST_TRANSPORT", 00:29:07.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.278 "adrfam": "ipv4", 00:29:07.278 "trsvcid": "$NVMF_PORT", 00:29:07.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.278 "hdgst": ${hdgst:-false}, 00:29:07.278 "ddgst": 
${ddgst:-false} 00:29:07.278 }, 00:29:07.278 "method": "bdev_nvme_attach_controller" 00:29:07.278 } 00:29:07.278 EOF 00:29:07.278 )") 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:07.278 13:36:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:07.278 "params": { 00:29:07.278 "name": "Nvme1", 00:29:07.278 "trtype": "tcp", 00:29:07.278 "traddr": "10.0.0.2", 00:29:07.278 "adrfam": "ipv4", 00:29:07.278 "trsvcid": "4420", 00:29:07.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:07.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:07.278 "hdgst": false, 00:29:07.278 "ddgst": false 00:29:07.278 }, 00:29:07.278 "method": "bdev_nvme_attach_controller" 00:29:07.278 }' 00:29:07.278 [2024-12-06 13:36:53.770123] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:29:07.278 [2024-12-06 13:36:53.770204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332062 ] 00:29:07.278 [2024-12-06 13:36:53.860739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.278 [2024-12-06 13:36:53.902918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.848 Running I/O for 15 seconds... 
00:29:09.725 11374.00 IOPS, 44.43 MiB/s [2024-12-06T12:36:56.956Z] 11341.50 IOPS, 44.30 MiB/s [2024-12-06T12:36:56.956Z] 13:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2331406 00:29:10.297 13:36:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:10.297 [2024-12-06 13:36:56.732696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.297 [2024-12-06 13:36:56.732740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.297 [... identical command/completion notice pairs repeated for lba 97616 through 98456 (READ and WRITE, qid:1): after the target process was killed, every outstanding I/O completed with ABORTED - SQ DELETION (00/08) ...]
p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 
13:36:56.734883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:10.300 [2024-12-06 13:36:56.734924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.734937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464ea0 is same with the state(6) to be set 00:29:10.300 [2024-12-06 13:36:56.734946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:10.300 [2024-12-06 13:36:56.734952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:10.300 [2024-12-06 13:36:56.734959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98624 len:8 PRP1 0x0 PRP2 0x0 00:29:10.300 [2024-12-06 13:36:56.734966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.300 [2024-12-06 13:36:56.738573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.300 [2024-12-06 13:36:56.738626] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.300 [2024-12-06 13:36:56.739586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.300 [2024-12-06 13:36:56.739604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.300 [2024-12-06 13:36:56.739612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.300 [2024-12-06 13:36:56.739858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.300 [2024-12-06 13:36:56.740110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.300 [2024-12-06 13:36:56.740119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.300 [2024-12-06 13:36:56.740128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.300 [2024-12-06 13:36:56.740137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.300 [2024-12-06 13:36:56.753247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.300 [2024-12-06 13:36:56.753945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.300 [2024-12-06 13:36:56.753984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.300 [2024-12-06 13:36:56.753996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.300 [2024-12-06 13:36:56.754262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.300 [2024-12-06 13:36:56.754522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.300 [2024-12-06 13:36:56.754532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.300 [2024-12-06 13:36:56.754540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.300 [2024-12-06 13:36:56.754549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.300 [2024-12-06 13:36:56.767666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.768231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.768270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.768282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.768554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.768808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.768818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.768826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.768834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.781938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.782667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.782708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.782719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.782984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.783232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.783241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.783248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.783256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.796361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.796932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.796953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.796962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.797206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.797451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.797465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.797472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.797479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.810805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.811376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.811394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.811402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.811652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.811897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.811905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.811917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.811924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.825254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.825900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.825943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.825955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.826222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.826480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.826490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.826498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.826506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.839609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.840252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.840297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.840308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.840585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.840847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.840858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.840865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.840873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.853991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.854741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.854787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.854799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.855067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.855315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.855326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.855334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.855342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.868471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.869157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.869206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.869218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.869498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.869748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.869757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.301 [2024-12-06 13:36:56.869766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.301 [2024-12-06 13:36:56.869774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.301 [2024-12-06 13:36:56.882905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.301 [2024-12-06 13:36:56.883520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.301 [2024-12-06 13:36:56.883554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.301 [2024-12-06 13:36:56.883564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.301 [2024-12-06 13:36:56.883821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.301 [2024-12-06 13:36:56.884068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.301 [2024-12-06 13:36:56.884076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.302 [2024-12-06 13:36:56.884084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.302 [2024-12-06 13:36:56.884091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.302 [2024-12-06 13:36:56.897210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.302 [2024-12-06 13:36:56.897801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.302 [2024-12-06 13:36:56.897825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.302 [2024-12-06 13:36:56.897834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.302 [2024-12-06 13:36:56.898080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.302 [2024-12-06 13:36:56.898324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.302 [2024-12-06 13:36:56.898334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.302 [2024-12-06 13:36:56.898341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.302 [2024-12-06 13:36:56.898348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.302 [2024-12-06 13:36:56.911470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.302 [2024-12-06 13:36:56.912098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.302 [2024-12-06 13:36:56.912120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.302 [2024-12-06 13:36:56.912135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.302 [2024-12-06 13:36:56.912381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.302 [2024-12-06 13:36:56.912634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.302 [2024-12-06 13:36:56.912643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.302 [2024-12-06 13:36:56.912651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.302 [2024-12-06 13:36:56.912657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.302 [2024-12-06 13:36:56.925785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.302 [2024-12-06 13:36:56.926439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.302 [2024-12-06 13:36:56.926510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.302 [2024-12-06 13:36:56.926522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.302 [2024-12-06 13:36:56.926799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.302 [2024-12-06 13:36:56.927049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.302 [2024-12-06 13:36:56.927059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.302 [2024-12-06 13:36:56.927067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.302 [2024-12-06 13:36:56.927076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.302 [2024-12-06 13:36:56.940216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.302 [2024-12-06 13:36:56.940791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.302 [2024-12-06 13:36:56.940820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.302 [2024-12-06 13:36:56.940829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.302 [2024-12-06 13:36:56.941076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.302 [2024-12-06 13:36:56.941336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.302 [2024-12-06 13:36:56.941346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.302 [2024-12-06 13:36:56.941354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.302 [2024-12-06 13:36:56.941361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.567 [2024-12-06 13:36:56.954508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.567 [2024-12-06 13:36:56.955227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-12-06 13:36:56.955289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.567 [2024-12-06 13:36:56.955302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.567 [2024-12-06 13:36:56.955594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.567 [2024-12-06 13:36:56.955854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.567 [2024-12-06 13:36:56.955864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.567 [2024-12-06 13:36:56.955872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.567 [2024-12-06 13:36:56.955881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.567 [2024-12-06 13:36:56.968784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.567 [2024-12-06 13:36:56.969461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-12-06 13:36:56.969491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.567 [2024-12-06 13:36:56.969500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.567 [2024-12-06 13:36:56.969748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.567 [2024-12-06 13:36:56.969994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.567 [2024-12-06 13:36:56.970004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.567 [2024-12-06 13:36:56.970011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.567 [2024-12-06 13:36:56.970019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.567 [2024-12-06 13:36:56.983167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.567 [2024-12-06 13:36:56.983820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-12-06 13:36:56.983846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.567 [2024-12-06 13:36:56.983856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.567 [2024-12-06 13:36:56.984105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.567 [2024-12-06 13:36:56.984351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.567 [2024-12-06 13:36:56.984360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.567 [2024-12-06 13:36:56.984367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.567 [2024-12-06 13:36:56.984374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.567 [2024-12-06 13:36:56.997508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.567 [2024-12-06 13:36:56.998112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-12-06 13:36:56.998138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.567 [2024-12-06 13:36:56.998146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.567 [2024-12-06 13:36:56.998393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.567 [2024-12-06 13:36:56.998651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.567 [2024-12-06 13:36:56.998661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.567 [2024-12-06 13:36:56.998675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.567 [2024-12-06 13:36:56.998682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.567 [2024-12-06 13:36:57.011821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.567 [2024-12-06 13:36:57.012414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-12-06 13:36:57.012439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.567 [2024-12-06 13:36:57.012449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.567 [2024-12-06 13:36:57.012703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.567 [2024-12-06 13:36:57.012949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.567 [2024-12-06 13:36:57.012962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.012970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.012978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.026110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.026757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.026782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.026790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.027038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.027284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.027292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.027300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.027307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.040506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.041183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.041245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.041258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.041550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.041818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.041828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.041837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.041845] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.054767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.055412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.055485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.055499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.055780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.056033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.056043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.056051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.056060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.069212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.069945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.070008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.070021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.070301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.070564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.070575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.070584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.070593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.083528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.084276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.084339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.084352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.084645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.084898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.084907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.084915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.084924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.097833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.098604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.098667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.098687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.098968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.099220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.099230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.099238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.099247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.112161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.112877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.112953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.113233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.113497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.113508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.113516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.113525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.126430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.127102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.127132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.127141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.127389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.127648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.127657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.127665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.127672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.140806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.141557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.141619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.141632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.141912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.142186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.142197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.142206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.142215] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.155139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.155772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.155802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.155811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.156060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.156305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.156314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.156321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.156329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.169473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.170178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.170239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.170252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.170546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.170800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.170809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.170819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.170828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.183765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.184441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.184480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.184490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.184739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.184986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.184995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.185010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.185018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.198175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.198773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.198801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.198809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.199057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.199302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.199311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.199318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.199326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.568 [2024-12-06 13:36:57.212462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.568 [2024-12-06 13:36:57.213049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-12-06 13:36:57.213075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.568 [2024-12-06 13:36:57.213083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.568 [2024-12-06 13:36:57.213329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.568 [2024-12-06 13:36:57.213586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.568 [2024-12-06 13:36:57.213597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.568 [2024-12-06 13:36:57.213605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.568 [2024-12-06 13:36:57.213612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.828 9528.33 IOPS, 37.22 MiB/s [2024-12-06T12:36:57.487Z] [2024-12-06 13:36:57.228395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.828 [2024-12-06 13:36:57.229142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.828 [2024-12-06 13:36:57.229207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.828 [2024-12-06 13:36:57.229221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.828 [2024-12-06 13:36:57.229515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.828 [2024-12-06 13:36:57.229769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.828 [2024-12-06 13:36:57.229781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.828 [2024-12-06 13:36:57.229790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.828 [2024-12-06 13:36:57.229800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.828 [2024-12-06 13:36:57.242746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.828 [2024-12-06 13:36:57.243409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.828 [2024-12-06 13:36:57.243441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.828 [2024-12-06 13:36:57.243451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.828 [2024-12-06 13:36:57.243708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.828 [2024-12-06 13:36:57.243960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.828 [2024-12-06 13:36:57.243971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.828 [2024-12-06 13:36:57.243979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.828 [2024-12-06 13:36:57.243989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.828 [2024-12-06 13:36:57.257112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:10.828 [2024-12-06 13:36:57.257737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.828 [2024-12-06 13:36:57.257802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:10.828 [2024-12-06 13:36:57.257816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:10.828 [2024-12-06 13:36:57.258098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:10.828 [2024-12-06 13:36:57.258351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:10.828 [2024-12-06 13:36:57.258364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:10.828 [2024-12-06 13:36:57.258374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:10.828 [2024-12-06 13:36:57.258383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:10.828 [2024-12-06 13:36:57.271547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.828 [2024-12-06 13:36:57.272252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.828 [2024-12-06 13:36:57.272314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.828 [2024-12-06 13:36:57.272327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.828 [2024-12-06 13:36:57.272617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.828 [2024-12-06 13:36:57.272871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.272884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.272894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.272904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.285829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.286535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.286576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.286593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.286855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.287105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.287117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.287125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.287133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.300253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.300995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.301049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.301061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.301335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.301597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.301609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.301618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.301627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.314513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.315202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.315253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.315265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.315546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.315799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.315810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.315818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.315827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.328931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.329679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.329731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.329743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.330014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.330277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.330289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.330297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.330306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.343207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.343826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.343852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.343861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.344108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.344354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.344365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.344372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.344380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.357519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.358174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.358225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.358237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.358520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.358772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.358784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.358792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.358801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.371917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.372659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.372714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.372726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.372999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.373251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.373263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.373278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.373287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.386203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.386908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.386969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.386982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.387260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.387529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.387543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.387552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.387561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.400686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.401396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.401470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.401484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.401765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.402019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.829 [2024-12-06 13:36:57.402032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.829 [2024-12-06 13:36:57.402041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.829 [2024-12-06 13:36:57.402051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.829 [2024-12-06 13:36:57.414952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.829 [2024-12-06 13:36:57.415607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.829 [2024-12-06 13:36:57.415673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.829 [2024-12-06 13:36:57.415687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.829 [2024-12-06 13:36:57.415970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.829 [2024-12-06 13:36:57.416224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.830 [2024-12-06 13:36:57.416236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.830 [2024-12-06 13:36:57.416246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.830 [2024-12-06 13:36:57.416256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.830 [2024-12-06 13:36:57.429423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.830 [2024-12-06 13:36:57.430146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.830 [2024-12-06 13:36:57.430211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.830 [2024-12-06 13:36:57.430224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.830 [2024-12-06 13:36:57.430519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.830 [2024-12-06 13:36:57.430774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.830 [2024-12-06 13:36:57.430787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.830 [2024-12-06 13:36:57.430796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.830 [2024-12-06 13:36:57.430805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.830 [2024-12-06 13:36:57.442425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.830 [2024-12-06 13:36:57.443110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.830 [2024-12-06 13:36:57.443169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.830 [2024-12-06 13:36:57.443180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.830 [2024-12-06 13:36:57.443382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.830 [2024-12-06 13:36:57.443584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.830 [2024-12-06 13:36:57.443595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.830 [2024-12-06 13:36:57.443601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.830 [2024-12-06 13:36:57.443611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.830 [2024-12-06 13:36:57.455513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.830 [2024-12-06 13:36:57.456157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.830 [2024-12-06 13:36:57.456206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.830 [2024-12-06 13:36:57.456215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.830 [2024-12-06 13:36:57.456411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.830 [2024-12-06 13:36:57.456597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.830 [2024-12-06 13:36:57.456607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.830 [2024-12-06 13:36:57.456614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.830 [2024-12-06 13:36:57.456621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.830 [2024-12-06 13:36:57.468513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.830 [2024-12-06 13:36:57.469122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.830 [2024-12-06 13:36:57.469169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.830 [2024-12-06 13:36:57.469184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.830 [2024-12-06 13:36:57.469377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.830 [2024-12-06 13:36:57.469564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.830 [2024-12-06 13:36:57.469574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.830 [2024-12-06 13:36:57.469582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.830 [2024-12-06 13:36:57.469590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:10.830 [2024-12-06 13:36:57.481494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:10.830 [2024-12-06 13:36:57.482089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.830 [2024-12-06 13:36:57.482133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:10.830 [2024-12-06 13:36:57.482141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:10.830 [2024-12-06 13:36:57.482333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:10.830 [2024-12-06 13:36:57.482519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:10.830 [2024-12-06 13:36:57.482529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:10.830 [2024-12-06 13:36:57.482535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:10.830 [2024-12-06 13:36:57.482543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.494423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.494932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.494952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.494959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.495129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.495300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.495308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.495314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.495319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.507361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.507901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.507918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.507924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.508093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.508268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.508276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.508281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.508287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.520311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.520830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.520845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.520851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.521020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.521189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.521197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.521203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.521208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.533240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.533860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.533896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.533905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.534092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.534264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.534272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.534278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.534284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.546169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.546743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.546778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.546787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.546973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.547146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.547153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.547163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.547169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.559196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.559821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.559854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.559863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.560048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.560219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.560226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.560232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.560238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.572260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.572877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.572909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.572918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.091 [2024-12-06 13:36:57.573102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.091 [2024-12-06 13:36:57.573274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.091 [2024-12-06 13:36:57.573282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.091 [2024-12-06 13:36:57.573288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.091 [2024-12-06 13:36:57.573293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.091 [2024-12-06 13:36:57.585322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.091 [2024-12-06 13:36:57.585941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.091 [2024-12-06 13:36:57.585972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.091 [2024-12-06 13:36:57.585981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.092 [2024-12-06 13:36:57.586165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.092 [2024-12-06 13:36:57.586336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.092 [2024-12-06 13:36:57.586344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.092 [2024-12-06 13:36:57.586349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.092 [2024-12-06 13:36:57.586355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.092 [2024-12-06 13:36:57.598386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.092 [2024-12-06 13:36:57.598961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.092 [2024-12-06 13:36:57.598993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.092 [2024-12-06 13:36:57.599002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.092 [2024-12-06 13:36:57.599186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.092 [2024-12-06 13:36:57.599356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.092 [2024-12-06 13:36:57.599364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.092 [2024-12-06 13:36:57.599370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.092 [2024-12-06 13:36:57.599376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.092 [2024-12-06 13:36:57.611395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.092 [2024-12-06 13:36:57.611878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.092 [2024-12-06 13:36:57.611909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.092 [2024-12-06 13:36:57.611918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.092 [2024-12-06 13:36:57.612102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.092 [2024-12-06 13:36:57.612273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.092 [2024-12-06 13:36:57.612280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.092 [2024-12-06 13:36:57.612286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.092 [2024-12-06 13:36:57.612292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.092 [2024-12-06 13:36:57.624320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.092 [2024-12-06 13:36:57.624941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.092 [2024-12-06 13:36:57.624973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.092 [2024-12-06 13:36:57.624982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.092 [2024-12-06 13:36:57.625166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.092 [2024-12-06 13:36:57.625337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.092 [2024-12-06 13:36:57.625344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.092 [2024-12-06 13:36:57.625349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.092 [2024-12-06 13:36:57.625355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.092 [2024-12-06 13:36:57.637381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.092 [2024-12-06 13:36:57.637994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.092 [2024-12-06 13:36:57.638026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.092 [2024-12-06 13:36:57.638037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.092 [2024-12-06 13:36:57.638221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.092 [2024-12-06 13:36:57.638393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.092 [2024-12-06 13:36:57.638400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.092 [2024-12-06 13:36:57.638406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.092 [2024-12-06 13:36:57.638412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.092 [2024-12-06 13:36:57.650441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.092 [2024-12-06 13:36:57.651007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.092 [2024-12-06 13:36:57.651038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.092 [2024-12-06 13:36:57.651046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.092 [2024-12-06 13:36:57.651230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.092 [2024-12-06 13:36:57.651402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.092 [2024-12-06 13:36:57.651409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.092 [2024-12-06 13:36:57.651415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.092 [2024-12-06 13:36:57.651421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.092 [2024-12-06 13:36:57.663448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.092 [2024-12-06 13:36:57.664064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.092 [2024-12-06 13:36:57.664095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.092 [2024-12-06 13:36:57.664104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.092 [2024-12-06 13:36:57.664288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.092 [2024-12-06 13:36:57.664467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.092 [2024-12-06 13:36:57.664475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.092 [2024-12-06 13:36:57.664481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.092 [2024-12-06 13:36:57.664487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.092 [2024-12-06 13:36:57.676383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.092 [2024-12-06 13:36:57.677008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.092 [2024-12-06 13:36:57.677039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.092 [2024-12-06 13:36:57.677048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.092 [2024-12-06 13:36:57.677231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.092 [2024-12-06 13:36:57.677407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.092 [2024-12-06 13:36:57.677415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.092 [2024-12-06 13:36:57.677421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.092 [2024-12-06 13:36:57.677427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.092 [2024-12-06 13:36:57.689444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.092 [2024-12-06 13:36:57.690038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.092 [2024-12-06 13:36:57.690070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.092 [2024-12-06 13:36:57.690079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.092 [2024-12-06 13:36:57.690263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.092 [2024-12-06 13:36:57.690434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.092 [2024-12-06 13:36:57.690442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.092 [2024-12-06 13:36:57.690448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.092 [2024-12-06 13:36:57.690463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.092 [2024-12-06 13:36:57.702474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.092 [2024-12-06 13:36:57.702989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.092 [2024-12-06 13:36:57.703004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.092 [2024-12-06 13:36:57.703010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.092 [2024-12-06 13:36:57.703179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.093 [2024-12-06 13:36:57.703347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.093 [2024-12-06 13:36:57.703354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.093 [2024-12-06 13:36:57.703360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.093 [2024-12-06 13:36:57.703365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.093 [2024-12-06 13:36:57.715540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.093 [2024-12-06 13:36:57.716143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.093 [2024-12-06 13:36:57.716175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.093 [2024-12-06 13:36:57.716184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.093 [2024-12-06 13:36:57.716368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.093 [2024-12-06 13:36:57.716548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.093 [2024-12-06 13:36:57.716557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.093 [2024-12-06 13:36:57.716567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.093 [2024-12-06 13:36:57.716573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.093 [2024-12-06 13:36:57.728594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.093 [2024-12-06 13:36:57.729114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.093 [2024-12-06 13:36:57.729130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.093 [2024-12-06 13:36:57.729136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.093 [2024-12-06 13:36:57.729304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.093 [2024-12-06 13:36:57.729478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.093 [2024-12-06 13:36:57.729485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.093 [2024-12-06 13:36:57.729490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.093 [2024-12-06 13:36:57.729495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.093 [2024-12-06 13:36:57.741512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.093 [2024-12-06 13:36:57.742019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.093 [2024-12-06 13:36:57.742032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.093 [2024-12-06 13:36:57.742039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.093 [2024-12-06 13:36:57.742206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.093 [2024-12-06 13:36:57.742375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.093 [2024-12-06 13:36:57.742382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.093 [2024-12-06 13:36:57.742387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.093 [2024-12-06 13:36:57.742392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.354 [2024-12-06 13:36:57.754426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.354 [2024-12-06 13:36:57.755039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.354 [2024-12-06 13:36:57.755070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.354 [2024-12-06 13:36:57.755079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.354 [2024-12-06 13:36:57.755263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.354 [2024-12-06 13:36:57.755435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.354 [2024-12-06 13:36:57.755443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.354 [2024-12-06 13:36:57.755449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.354 [2024-12-06 13:36:57.755463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.354 [2024-12-06 13:36:57.767418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.354 [2024-12-06 13:36:57.768057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.354 [2024-12-06 13:36:57.768089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.354 [2024-12-06 13:36:57.768097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.354 [2024-12-06 13:36:57.768281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.354 [2024-12-06 13:36:57.768453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.354 [2024-12-06 13:36:57.768467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.354 [2024-12-06 13:36:57.768473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.354 [2024-12-06 13:36:57.768479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.354 [2024-12-06 13:36:57.780341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.354 [2024-12-06 13:36:57.780829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.354 [2024-12-06 13:36:57.780845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.354 [2024-12-06 13:36:57.780851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.354 [2024-12-06 13:36:57.781020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.354 [2024-12-06 13:36:57.781188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.354 [2024-12-06 13:36:57.781195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.354 [2024-12-06 13:36:57.781200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.354 [2024-12-06 13:36:57.781205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.354 [2024-12-06 13:36:57.793377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.354 [2024-12-06 13:36:57.793853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.354 [2024-12-06 13:36:57.793883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.354 [2024-12-06 13:36:57.793892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.354 [2024-12-06 13:36:57.794076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.354 [2024-12-06 13:36:57.794247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.354 [2024-12-06 13:36:57.794254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.354 [2024-12-06 13:36:57.794260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.354 [2024-12-06 13:36:57.794266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.354 [2024-12-06 13:36:57.806287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.354 [2024-12-06 13:36:57.806872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.354 [2024-12-06 13:36:57.806903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.354 [2024-12-06 13:36:57.806915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.354 [2024-12-06 13:36:57.807099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.354 [2024-12-06 13:36:57.807270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.354 [2024-12-06 13:36:57.807277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.354 [2024-12-06 13:36:57.807283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.354 [2024-12-06 13:36:57.807289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.354 [2024-12-06 13:36:57.819312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.354 [2024-12-06 13:36:57.819886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.354 [2024-12-06 13:36:57.819918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.354 [2024-12-06 13:36:57.819927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.354 [2024-12-06 13:36:57.820110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.354 [2024-12-06 13:36:57.820282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.354 [2024-12-06 13:36:57.820289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.820295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.820301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.832326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.832946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.832978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.832987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.833171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.833343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.833350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.833356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.833362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.845249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.845862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.845893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.845902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.846085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.846261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.846268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.846274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.846279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.858304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.858794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.858810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.858816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.858984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.859153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.859159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.859165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.859170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.871340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.871864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.871878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.871884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.872052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.872220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.872227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.872232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.872238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.884252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.884812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.884844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.884853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.885037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.885208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.885215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.885224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.885230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.897250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.897848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.897880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.897888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.898072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.898243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.898251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.898256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.898262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.910287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.910786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.910802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.910808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.910977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.911145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.911152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.911157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.911162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.923363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.355 [2024-12-06 13:36:57.923948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.355 [2024-12-06 13:36:57.923979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.355 [2024-12-06 13:36:57.923988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.355 [2024-12-06 13:36:57.924172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.355 [2024-12-06 13:36:57.924343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.355 [2024-12-06 13:36:57.924351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.355 [2024-12-06 13:36:57.924356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.355 [2024-12-06 13:36:57.924363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.355 [2024-12-06 13:36:57.936396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.355 [2024-12-06 13:36:57.937018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.355 [2024-12-06 13:36:57.937049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.355 [2024-12-06 13:36:57.937058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.355 [2024-12-06 13:36:57.937242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.355 [2024-12-06 13:36:57.937414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.355 [2024-12-06 13:36:57.937421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.355 [2024-12-06 13:36:57.937427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.355 [2024-12-06 13:36:57.937432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.355 [2024-12-06 13:36:57.949462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.355 [2024-12-06 13:36:57.950059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.355 [2024-12-06 13:36:57.950090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.356 [2024-12-06 13:36:57.950099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.356 [2024-12-06 13:36:57.950283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.356 [2024-12-06 13:36:57.950462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.356 [2024-12-06 13:36:57.950471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.356 [2024-12-06 13:36:57.950476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.356 [2024-12-06 13:36:57.950482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.356 [2024-12-06 13:36:57.962494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.356 [2024-12-06 13:36:57.963109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.356 [2024-12-06 13:36:57.963141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.356 [2024-12-06 13:36:57.963150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.356 [2024-12-06 13:36:57.963333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.356 [2024-12-06 13:36:57.963513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.356 [2024-12-06 13:36:57.963522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.356 [2024-12-06 13:36:57.963529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.356 [2024-12-06 13:36:57.963535] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.356 [2024-12-06 13:36:57.975555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.356 [2024-12-06 13:36:57.976141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.356 [2024-12-06 13:36:57.976173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.356 [2024-12-06 13:36:57.976189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.356 [2024-12-06 13:36:57.976373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.356 [2024-12-06 13:36:57.976552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.356 [2024-12-06 13:36:57.976560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.356 [2024-12-06 13:36:57.976566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.356 [2024-12-06 13:36:57.976572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.356 [2024-12-06 13:36:57.988581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.356 [2024-12-06 13:36:57.989064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.356 [2024-12-06 13:36:57.989080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.356 [2024-12-06 13:36:57.989086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.356 [2024-12-06 13:36:57.989254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.356 [2024-12-06 13:36:57.989422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.356 [2024-12-06 13:36:57.989429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.356 [2024-12-06 13:36:57.989434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.356 [2024-12-06 13:36:57.989439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.356 [2024-12-06 13:36:58.001607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.356 [2024-12-06 13:36:58.002113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.356 [2024-12-06 13:36:58.002127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.356 [2024-12-06 13:36:58.002133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.356 [2024-12-06 13:36:58.002301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.356 [2024-12-06 13:36:58.002475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.356 [2024-12-06 13:36:58.002482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.356 [2024-12-06 13:36:58.002488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.356 [2024-12-06 13:36:58.002492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.014670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.015162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.015177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.015183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.015352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.015530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.015537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.015542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.015547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.027739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.028236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.028250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.028256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.028423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.028597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.028604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.028609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.028614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.040795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.041294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.041307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.041313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.041486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.041655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.041661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.041667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.041672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.053839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.054336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.054349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.054355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.054528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.054696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.054703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.054711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.054716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.066891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.067399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.067412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.067418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.067592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.067760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.067768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.067774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.067779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.079954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.080423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.080437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.080442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.080614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.080783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.080790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.080795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.080799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.092966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.093476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.093490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.093495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.093663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.093831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.619 [2024-12-06 13:36:58.093838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.619 [2024-12-06 13:36:58.093843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.619 [2024-12-06 13:36:58.093848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.619 [2024-12-06 13:36:58.106018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.619 [2024-12-06 13:36:58.106634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.619 [2024-12-06 13:36:58.106665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.619 [2024-12-06 13:36:58.106674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.619 [2024-12-06 13:36:58.106858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.619 [2024-12-06 13:36:58.107029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.107036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.107042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.107048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.119075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.119693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.119725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.119734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.119918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.120089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.120096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.120102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.120108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.132126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.132752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.132783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.132792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.132976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.133147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.133154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.133160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.133166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.145188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.145805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.145836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.145848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.146032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.146211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.146219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.146225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.146230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.158254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.158771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.158802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.158811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.158996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.159168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.159175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.159181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.159188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.171212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.171813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.171844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.171853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.172036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.172208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.172216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.172222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.172228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.184279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.184836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.184868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.184877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.185061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.185236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.185243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.185249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.185255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.197274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.197853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.197885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.197894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.198078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.198249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.198257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.198263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.198269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.210290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.210898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.210930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.210938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.211122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.211294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.211301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.211307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.211313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.620 [2024-12-06 13:36:58.223355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.620 [2024-12-06 13:36:58.223912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.620 [2024-12-06 13:36:58.223943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.620 [2024-12-06 13:36:58.223952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.620 [2024-12-06 13:36:58.224135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.620 [2024-12-06 13:36:58.224306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.620 [2024-12-06 13:36:58.224314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.620 [2024-12-06 13:36:58.224324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.620 [2024-12-06 13:36:58.224330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.621 7146.25 IOPS, 27.92 MiB/s [2024-12-06T12:36:58.280Z] [2024-12-06 13:36:58.236355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.621 [2024-12-06 13:36:58.236974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.621 [2024-12-06 13:36:58.237005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.621 [2024-12-06 13:36:58.237015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.621 [2024-12-06 13:36:58.237198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.621 [2024-12-06 13:36:58.237370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.621 [2024-12-06 13:36:58.237377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.621 [2024-12-06 13:36:58.237383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.621 [2024-12-06 13:36:58.237389] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:11.621 [2024-12-06 13:36:58.249422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.621 [2024-12-06 13:36:58.250042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.621 [2024-12-06 13:36:58.250074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.621 [2024-12-06 13:36:58.250083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.621 [2024-12-06 13:36:58.250267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.621 [2024-12-06 13:36:58.250438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.621 [2024-12-06 13:36:58.250445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.621 [2024-12-06 13:36:58.250451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.621 [2024-12-06 13:36:58.250467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.621 [2024-12-06 13:36:58.262479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.621 [2024-12-06 13:36:58.263091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.621 [2024-12-06 13:36:58.263122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.621 [2024-12-06 13:36:58.263131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.621 [2024-12-06 13:36:58.263314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.621 [2024-12-06 13:36:58.263494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.621 [2024-12-06 13:36:58.263502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.621 [2024-12-06 13:36:58.263508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.621 [2024-12-06 13:36:58.263514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.275555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.276131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.276163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.276172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.276356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.276537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.276546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.276552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.276558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.288597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.289068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.289083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.289089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.289258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.289427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.289435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.289440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.289445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.301622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.302088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.302102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.302107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.302275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.302443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.302450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.302461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.302467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.314647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.315156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.315169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.315178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.315346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.315519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.315526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.315532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.315537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.327567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.328026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.328039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.328044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.328212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.328380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.328387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.328392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.328398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.340596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.341099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.341113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.341118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.341287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.341460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.341467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.341473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.341478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.353520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.354025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.354038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.354044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.354211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.354383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.354389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.354395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.354400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.366430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.366942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.366956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.366962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.367130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.367298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.367304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.367310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.367315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.379350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.379859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.379872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.379878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.380046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.380214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.380221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.380226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.380231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.392410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.392927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.392941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.392946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.393114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.393282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.393289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.393298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.393302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.405329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.405813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.405827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.405832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.406000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.406168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.406175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.883 [2024-12-06 13:36:58.406180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.883 [2024-12-06 13:36:58.406184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.883 [2024-12-06 13:36:58.418364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.883 [2024-12-06 13:36:58.418847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.883 [2024-12-06 13:36:58.418860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.883 [2024-12-06 13:36:58.418866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.883 [2024-12-06 13:36:58.419033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.883 [2024-12-06 13:36:58.419201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.883 [2024-12-06 13:36:58.419208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.419213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.419218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.431392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.431891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.431905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.431910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.432078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.432246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.432253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.432258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.432263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.444439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.444895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.444908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.444914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.445081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.445249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.445255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.445266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.445279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.457490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.457873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.457886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.457892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.458060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.458227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.458234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.458239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.458244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.470422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.470928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.470942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.470947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.471115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.471283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.471290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.471296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.471301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.483482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.483948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.483961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.483970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.484137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.484305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.484312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.484317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.484322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.496501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.496958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.496971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.496977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.497145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.497313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.497320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.497325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.497330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.509509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.509991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.510003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.510009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.510176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.510344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.510351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.510356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.510361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.522538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:11.884 [2024-12-06 13:36:58.522915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.884 [2024-12-06 13:36:58.522928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:11.884 [2024-12-06 13:36:58.522933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:11.884 [2024-12-06 13:36:58.523100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:11.884 [2024-12-06 13:36:58.523272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:11.884 [2024-12-06 13:36:58.523279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:11.884 [2024-12-06 13:36:58.523284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:11.884 [2024-12-06 13:36:58.523289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:11.884 [2024-12-06 13:36:58.535478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:11.884 [2024-12-06 13:36:58.535954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.884 [2024-12-06 13:36:58.535967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:11.884 [2024-12-06 13:36:58.535974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:11.884 [2024-12-06 13:36:58.536142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:11.884 [2024-12-06 13:36:58.536310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:11.884 [2024-12-06 13:36:58.536317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:11.884 [2024-12-06 13:36:58.536323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:11.884 [2024-12-06 13:36:58.536328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.548527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.549048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.549062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.549069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.146 [2024-12-06 13:36:58.549237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.146 [2024-12-06 13:36:58.549405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.146 [2024-12-06 13:36:58.549412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.146 [2024-12-06 13:36:58.549417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.146 [2024-12-06 13:36:58.549422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.561448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.561923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.561936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.561942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.146 [2024-12-06 13:36:58.562109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.146 [2024-12-06 13:36:58.562277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.146 [2024-12-06 13:36:58.562285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.146 [2024-12-06 13:36:58.562293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.146 [2024-12-06 13:36:58.562298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.574489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.574966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.574979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.574985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.146 [2024-12-06 13:36:58.575152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.146 [2024-12-06 13:36:58.575320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.146 [2024-12-06 13:36:58.575327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.146 [2024-12-06 13:36:58.575332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.146 [2024-12-06 13:36:58.575337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.587535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.588038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.588051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.588057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.146 [2024-12-06 13:36:58.588224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.146 [2024-12-06 13:36:58.588392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.146 [2024-12-06 13:36:58.588399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.146 [2024-12-06 13:36:58.588404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.146 [2024-12-06 13:36:58.588409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.600601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.601089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.601102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.601107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.146 [2024-12-06 13:36:58.601275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.146 [2024-12-06 13:36:58.601443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.146 [2024-12-06 13:36:58.601449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.146 [2024-12-06 13:36:58.601460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.146 [2024-12-06 13:36:58.601465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.613655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.614056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.614069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.614074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.146 [2024-12-06 13:36:58.614242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.146 [2024-12-06 13:36:58.614410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.146 [2024-12-06 13:36:58.614417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.146 [2024-12-06 13:36:58.614422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.146 [2024-12-06 13:36:58.614427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.146 [2024-12-06 13:36:58.626612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.146 [2024-12-06 13:36:58.627124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.146 [2024-12-06 13:36:58.627137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.146 [2024-12-06 13:36:58.627142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.627310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.627483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.627491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.627496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.627501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.639523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.640027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.640040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.640045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.640213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.640381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.640387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.640392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.640397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.652428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.652888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.652903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.652911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.653079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.653247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.653254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.653259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.653265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.665463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.665970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.665984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.665990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.666157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.666326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.666332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.666337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.666343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.678374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.678820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.678833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.678839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.679007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.679174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.679181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.679186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.679191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.691446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.691957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.691969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.691975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.692143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.692315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.692322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.692327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.692332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.704362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.704853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.704867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.704872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.705041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.705209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.705216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.705221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.705226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.717422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.717931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.717946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.717951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.718119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.718287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.718295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.718300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.718305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.730342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.730884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.730897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.730903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.731070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.731239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.731246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.731255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.731260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.743293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.743769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.743783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.147 [2024-12-06 13:36:58.743789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.147 [2024-12-06 13:36:58.743957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.147 [2024-12-06 13:36:58.744125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.147 [2024-12-06 13:36:58.744131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.147 [2024-12-06 13:36:58.744136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.147 [2024-12-06 13:36:58.744143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.147 [2024-12-06 13:36:58.756340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.147 [2024-12-06 13:36:58.757105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.147 [2024-12-06 13:36:58.757124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.148 [2024-12-06 13:36:58.757131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.148 [2024-12-06 13:36:58.757305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.148 [2024-12-06 13:36:58.757482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.148 [2024-12-06 13:36:58.757489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.148 [2024-12-06 13:36:58.757495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.148 [2024-12-06 13:36:58.757500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.148 [2024-12-06 13:36:58.769377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.148 [2024-12-06 13:36:58.769717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.148 [2024-12-06 13:36:58.769734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.148 [2024-12-06 13:36:58.769739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.148 [2024-12-06 13:36:58.769908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.148 [2024-12-06 13:36:58.770078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.148 [2024-12-06 13:36:58.770085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.148 [2024-12-06 13:36:58.770090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.148 [2024-12-06 13:36:58.770095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.148 [2024-12-06 13:36:58.782312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.148 [2024-12-06 13:36:58.782888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.148 [2024-12-06 13:36:58.782920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.148 [2024-12-06 13:36:58.782929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.148 [2024-12-06 13:36:58.783112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.148 [2024-12-06 13:36:58.783284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.148 [2024-12-06 13:36:58.783291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.148 [2024-12-06 13:36:58.783297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.148 [2024-12-06 13:36:58.783303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.148 [2024-12-06 13:36:58.795415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.148 [2024-12-06 13:36:58.795902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.148 [2024-12-06 13:36:58.795919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.148 [2024-12-06 13:36:58.795925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.148 [2024-12-06 13:36:58.796094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.148 [2024-12-06 13:36:58.796263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.148 [2024-12-06 13:36:58.796270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.148 [2024-12-06 13:36:58.796276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.148 [2024-12-06 13:36:58.796282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.409 [2024-12-06 13:36:58.808482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.409 [2024-12-06 13:36:58.808823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-12-06 13:36:58.808837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.409 [2024-12-06 13:36:58.808843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.409 [2024-12-06 13:36:58.809011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.409 [2024-12-06 13:36:58.809179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.409 [2024-12-06 13:36:58.809186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.409 [2024-12-06 13:36:58.809192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.409 [2024-12-06 13:36:58.809197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.409 [2024-12-06 13:36:58.821552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.409 [2024-12-06 13:36:58.822087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-12-06 13:36:58.822119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.409 [2024-12-06 13:36:58.822132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.409 [2024-12-06 13:36:58.822316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.409 [2024-12-06 13:36:58.822494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.409 [2024-12-06 13:36:58.822502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.409 [2024-12-06 13:36:58.822508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.409 [2024-12-06 13:36:58.822514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.409 [2024-12-06 13:36:58.834545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.409 [2024-12-06 13:36:58.835029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-12-06 13:36:58.835045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.409 [2024-12-06 13:36:58.835051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.409 [2024-12-06 13:36:58.835219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.409 [2024-12-06 13:36:58.835388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.409 [2024-12-06 13:36:58.835394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.409 [2024-12-06 13:36:58.835400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.409 [2024-12-06 13:36:58.835405] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.409 [2024-12-06 13:36:58.847595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.409 [2024-12-06 13:36:58.848207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-12-06 13:36:58.848238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.409 [2024-12-06 13:36:58.848247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.409 [2024-12-06 13:36:58.848431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.409 [2024-12-06 13:36:58.848619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.409 [2024-12-06 13:36:58.848628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.409 [2024-12-06 13:36:58.848633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.409 [2024-12-06 13:36:58.848639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.409 [2024-12-06 13:36:58.860511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.409 [2024-12-06 13:36:58.861034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-12-06 13:36:58.861049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.409 [2024-12-06 13:36:58.861055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.409 [2024-12-06 13:36:58.861224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.409 [2024-12-06 13:36:58.861396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.409 [2024-12-06 13:36:58.861403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.409 [2024-12-06 13:36:58.861409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.409 [2024-12-06 13:36:58.861415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.409 [2024-12-06 13:36:58.873442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.409 [2024-12-06 13:36:58.873922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-12-06 13:36:58.873936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.409 [2024-12-06 13:36:58.873942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.409 [2024-12-06 13:36:58.874109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.409 [2024-12-06 13:36:58.874277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.409 [2024-12-06 13:36:58.874284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.410 [2024-12-06 13:36:58.874289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.410 [2024-12-06 13:36:58.874294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.410 [2024-12-06 13:36:58.886491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.410 [2024-12-06 13:36:58.886979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.410 [2024-12-06 13:36:58.886992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.410 [2024-12-06 13:36:58.886998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.410 [2024-12-06 13:36:58.887166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.410 [2024-12-06 13:36:58.887333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.410 [2024-12-06 13:36:58.887340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.410 [2024-12-06 13:36:58.887345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.410 [2024-12-06 13:36:58.887351] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.410 [2024-12-06 13:36:58.899533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.899999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.900012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.900018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.900186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.900354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.900360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.900369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.900374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.912565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.913029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.913042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.913047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.913215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.913383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.913390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.913395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.913400] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.925586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.926057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.926071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.926076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.926244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.926411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.926418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.926424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.926429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.938619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.939126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.939139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.939145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.939312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.939486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.939493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.939498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.939503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.951539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.952047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.952060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.952066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.952233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.952401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.952408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.952413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.952418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.964446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.964933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.964947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.964952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.965120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.965288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.965295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.965300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.965305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.977505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.977940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.977954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.977960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.978127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.978296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.978303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.978308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.978313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:58.990507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.410 [2024-12-06 13:36:58.990982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-12-06 13:36:58.990994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.410 [2024-12-06 13:36:58.991003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.410 [2024-12-06 13:36:58.991170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.410 [2024-12-06 13:36:58.991338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.410 [2024-12-06 13:36:58.991345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.410 [2024-12-06 13:36:58.991350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.410 [2024-12-06 13:36:58.991356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.410 [2024-12-06 13:36:59.003549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.411 [2024-12-06 13:36:59.004055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-12-06 13:36:59.004068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.411 [2024-12-06 13:36:59.004073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.411 [2024-12-06 13:36:59.004240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.411 [2024-12-06 13:36:59.004408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.411 [2024-12-06 13:36:59.004414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.411 [2024-12-06 13:36:59.004420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.411 [2024-12-06 13:36:59.004425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.411 [2024-12-06 13:36:59.016616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.411 [2024-12-06 13:36:59.017123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-12-06 13:36:59.017136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.411 [2024-12-06 13:36:59.017141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.411 [2024-12-06 13:36:59.017308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.411 [2024-12-06 13:36:59.017483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.411 [2024-12-06 13:36:59.017491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.411 [2024-12-06 13:36:59.017497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.411 [2024-12-06 13:36:59.017502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.411 [2024-12-06 13:36:59.029683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.411 [2024-12-06 13:36:59.030083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-12-06 13:36:59.030096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.411 [2024-12-06 13:36:59.030102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.411 [2024-12-06 13:36:59.030269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.411 [2024-12-06 13:36:59.030440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.411 [2024-12-06 13:36:59.030448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.411 [2024-12-06 13:36:59.030458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.411 [2024-12-06 13:36:59.030464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.411 [2024-12-06 13:36:59.042650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.411 [2024-12-06 13:36:59.043147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-12-06 13:36:59.043160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.411 [2024-12-06 13:36:59.043165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.411 [2024-12-06 13:36:59.043333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.411 [2024-12-06 13:36:59.043506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.411 [2024-12-06 13:36:59.043513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.411 [2024-12-06 13:36:59.043518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.411 [2024-12-06 13:36:59.043523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.411 [2024-12-06 13:36:59.055714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.411 [2024-12-06 13:36:59.056185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-12-06 13:36:59.056198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.411 [2024-12-06 13:36:59.056204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.411 [2024-12-06 13:36:59.056373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.411 [2024-12-06 13:36:59.056547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.411 [2024-12-06 13:36:59.056555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.411 [2024-12-06 13:36:59.056560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.411 [2024-12-06 13:36:59.056566] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.068760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.069221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.069234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.069240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.069408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.069582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.069589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.069597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.069602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.081802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.082310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.082323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.082329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.082502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.082671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.082678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.082684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.082689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.094719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.095220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.095234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.095239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.095407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.095580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.095587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.095593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.095598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.107785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.108249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.108262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.108268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.108435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.108608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.108616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.108621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.108625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.120806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.121309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.121322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.121328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.121500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.121668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.121675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.121681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.121686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.133873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.134333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.134346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.134352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.134526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.134694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.134701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.134706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.134711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.146893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.147406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.147419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.147425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.147598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.147767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.147774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.147779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.147784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.159807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.160306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.160319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.160328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.160502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.160671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.160678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.160684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.160689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.172717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.173189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.673 [2024-12-06 13:36:59.173201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.673 [2024-12-06 13:36:59.173207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.673 [2024-12-06 13:36:59.173375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.673 [2024-12-06 13:36:59.173548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.673 [2024-12-06 13:36:59.173555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.673 [2024-12-06 13:36:59.173561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.673 [2024-12-06 13:36:59.173565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.673 [2024-12-06 13:36:59.185873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.673 [2024-12-06 13:36:59.186377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.186391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.186397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.186571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.186740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.186747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.186752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.186757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.198936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.199438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.199451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.199463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.199631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.199802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.199809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.199814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.199819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.212005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.212505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.212519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.212525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.212693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.212862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.212868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.212873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.212878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.225070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.225552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.225567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.225572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.225740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.225908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.225915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.225920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.225925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 5717.00 IOPS, 22.33 MiB/s [2024-12-06T12:36:59.333Z] [2024-12-06 13:36:59.238118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.238726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.238758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.238767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.238950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.239122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.239130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.239138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.239145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.251177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.251756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.251788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.251797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.251981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.252153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.252160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.252165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.252171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.264193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.264806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.264838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.264847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.265030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.265202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.265209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.265215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.265220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.277261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.277776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.277792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.277798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.277967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.278135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.278142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.278148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.278153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.290319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.290804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.290817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.290823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.290990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.291158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.291165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.291170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.291175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.674 [2024-12-06 13:36:59.303340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.674 [2024-12-06 13:36:59.303899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.674 [2024-12-06 13:36:59.303930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.674 [2024-12-06 13:36:59.303939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.674 [2024-12-06 13:36:59.304123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.674 [2024-12-06 13:36:59.304294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.674 [2024-12-06 13:36:59.304302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.674 [2024-12-06 13:36:59.304307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.674 [2024-12-06 13:36:59.304313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.675 [2024-12-06 13:36:59.316337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.675 [2024-12-06 13:36:59.316925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.675 [2024-12-06 13:36:59.316956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.675 [2024-12-06 13:36:59.316965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.675 [2024-12-06 13:36:59.317149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.675 [2024-12-06 13:36:59.317320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.675 [2024-12-06 13:36:59.317328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.675 [2024-12-06 13:36:59.317334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.675 [2024-12-06 13:36:59.317340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.329367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.329975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.330010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.330019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.330202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.330374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.330382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.330388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.330394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.342414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.342900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.342916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.342922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.343090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.343259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.343266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.343271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.343277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.355457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.356051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.356082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.356091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.356275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.356446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.356461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.356468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.356473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.368485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.369058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.369090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.369099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.369282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.369468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.369476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.369483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.369489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.381512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.382022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.382038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.382044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.382212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.382380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.382387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.382392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.382398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.394581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.395097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.395111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.395116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.395284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.395453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.395492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.395498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.395503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.407510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.408011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.408025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.408031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.408198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.408367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.408373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.408382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.408387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.420556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.421073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.421087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.421093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.421260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.421429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.421436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.421442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.421447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.433471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.433955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.433968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.433973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.434141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.434309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.434316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.434322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.434327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.446507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.447014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.447028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.447033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.447201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.447368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.447375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.447380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.447385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.459572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.459960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.459974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.459979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.460147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.460315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.460322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.460328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.460333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.472511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.472965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.472978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.472984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.473152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.473319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.473327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.473332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.473337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.485518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:12.936 [2024-12-06 13:36:59.486020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.936 [2024-12-06 13:36:59.486033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:12.936 [2024-12-06 13:36:59.486039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:12.936 [2024-12-06 13:36:59.486206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:12.936 [2024-12-06 13:36:59.486374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:12.936 [2024-12-06 13:36:59.486381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:12.936 [2024-12-06 13:36:59.486386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:12.936 [2024-12-06 13:36:59.486391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:12.936 [2024-12-06 13:36:59.498558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.936 [2024-12-06 13:36:59.499016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.936 [2024-12-06 13:36:59.499029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.936 [2024-12-06 13:36:59.499037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.936 [2024-12-06 13:36:59.499205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.936 [2024-12-06 13:36:59.499373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.936 [2024-12-06 13:36:59.499380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.936 [2024-12-06 13:36:59.499386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.499390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.511560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.512062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.512075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.512080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.512247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.512415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.512422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.512428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.512432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.524602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.525109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.525122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.525127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.525295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.525469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.525476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.525481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.525486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.537652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.538112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.538124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.538130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.538297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.538473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.538480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.538485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.538490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.550662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.551131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.551144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.551149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.551317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.551490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.551498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.551503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.551508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.563673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.564185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.564198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.564204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.564371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.564544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.564551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.564556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.564561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.576721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.577176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.577189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.577194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.577362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.577541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.577549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.577558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.577564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:12.937 [2024-12-06 13:36:59.589742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:12.937 [2024-12-06 13:36:59.590201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.937 [2024-12-06 13:36:59.590214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:12.937 [2024-12-06 13:36:59.590220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:12.937 [2024-12-06 13:36:59.590387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:12.937 [2024-12-06 13:36:59.590560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:12.937 [2024-12-06 13:36:59.590568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:12.937 [2024-12-06 13:36:59.590573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:12.937 [2024-12-06 13:36:59.590578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.198 [2024-12-06 13:36:59.602746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.198 [2024-12-06 13:36:59.603246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.198 [2024-12-06 13:36:59.603258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.198 [2024-12-06 13:36:59.603264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.198 [2024-12-06 13:36:59.603431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.198 [2024-12-06 13:36:59.603604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.198 [2024-12-06 13:36:59.603612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.198 [2024-12-06 13:36:59.603617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.198 [2024-12-06 13:36:59.603622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.198 [2024-12-06 13:36:59.615785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.198 [2024-12-06 13:36:59.616291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.198 [2024-12-06 13:36:59.616304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.198 [2024-12-06 13:36:59.616310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.198 [2024-12-06 13:36:59.616482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.198 [2024-12-06 13:36:59.616651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.198 [2024-12-06 13:36:59.616657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.198 [2024-12-06 13:36:59.616663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.198 [2024-12-06 13:36:59.616668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.198 [2024-12-06 13:36:59.628838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.198 [2024-12-06 13:36:59.629345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.198 [2024-12-06 13:36:59.629358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.629364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.629536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.629705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.629711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.629718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.629723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.641886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.642389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.642402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.642408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.642581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.642750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.642757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.642763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.642767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.654811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.655314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.655328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.655333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.655506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.655675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.655682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.655687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.655693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.667856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.668358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.668371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.668380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.668554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.668723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.668731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.668736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.668741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.680903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.681400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.681413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.681419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.681590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.681759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.681766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.681771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.681775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.693932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.694389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.694402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.694408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.694584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.694754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.694761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.694766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.694771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.706935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.707403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.707416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.707422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.707595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.707767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.707774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.707779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.707784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 [2024-12-06 13:36:59.719944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.720401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.720414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.720419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.720591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.720760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.720766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.720772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.720778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2331406 Killed "${NVMF_APP[@]}" "$@"
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:13.199 [2024-12-06 13:36:59.732939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.199 [2024-12-06 13:36:59.733441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.199 [2024-12-06 13:36:59.733457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.199 [2024-12-06 13:36:59.733463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.199 [2024-12-06 13:36:59.733630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.199 [2024-12-06 13:36:59.733798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.199 [2024-12-06 13:36:59.733805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.199 [2024-12-06 13:36:59.733811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.199 [2024-12-06 13:36:59.733815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2333106
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2333106
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2333106 ']'
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:13.199 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:13.200 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:13.200 13:36:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:13.200 [2024-12-06 13:36:59.745987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.200 [2024-12-06 13:36:59.746372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.200 [2024-12-06 13:36:59.746386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.200 [2024-12-06 13:36:59.746392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.200 [2024-12-06 13:36:59.746565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.200 [2024-12-06 13:36:59.746734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.200 [2024-12-06 13:36:59.746741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.200 [2024-12-06 13:36:59.746746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.200 [2024-12-06 13:36:59.746751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.200 [2024-12-06 13:36:59.758925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.200 [2024-12-06 13:36:59.759425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.200 [2024-12-06 13:36:59.759438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.200 [2024-12-06 13:36:59.759444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.200 [2024-12-06 13:36:59.759614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.200 [2024-12-06 13:36:59.759783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.200 [2024-12-06 13:36:59.759789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.200 [2024-12-06 13:36:59.759795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.200 [2024-12-06 13:36:59.759801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.200 [2024-12-06 13:36:59.771966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:13.200 [2024-12-06 13:36:59.772431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.200 [2024-12-06 13:36:59.772444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420
00:29:13.200 [2024-12-06 13:36:59.772449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set
00:29:13.200 [2024-12-06 13:36:59.772620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor
00:29:13.200 [2024-12-06 13:36:59.772788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:13.200 [2024-12-06 13:36:59.772798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:13.200 [2024-12-06 13:36:59.772803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:13.200 [2024-12-06 13:36:59.772808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:13.200 [2024-12-06 13:36:59.784988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.200 [2024-12-06 13:36:59.785501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-12-06 13:36:59.785516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.200 [2024-12-06 13:36:59.785521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.200 [2024-12-06 13:36:59.785690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.200 [2024-12-06 13:36:59.785858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.200 [2024-12-06 13:36:59.785864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.200 [2024-12-06 13:36:59.785869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.200 [2024-12-06 13:36:59.785875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.200 [2024-12-06 13:36:59.789026] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:29:13.200 [2024-12-06 13:36:59.789073] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.200 [2024-12-06 13:36:59.798039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.200 [2024-12-06 13:36:59.798547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-12-06 13:36:59.798560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.200 [2024-12-06 13:36:59.798566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.200 [2024-12-06 13:36:59.798734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.200 [2024-12-06 13:36:59.798902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.200 [2024-12-06 13:36:59.798908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.200 [2024-12-06 13:36:59.798913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.200 [2024-12-06 13:36:59.798918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.200 [2024-12-06 13:36:59.811084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.200 [2024-12-06 13:36:59.811575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-12-06 13:36:59.811607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.200 [2024-12-06 13:36:59.811616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.200 [2024-12-06 13:36:59.811802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.200 [2024-12-06 13:36:59.811976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.200 [2024-12-06 13:36:59.811984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.200 [2024-12-06 13:36:59.811990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.200 [2024-12-06 13:36:59.811996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.200 [2024-12-06 13:36:59.824089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.200 [2024-12-06 13:36:59.824795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-12-06 13:36:59.824827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.200 [2024-12-06 13:36:59.824836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.200 [2024-12-06 13:36:59.825021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.200 [2024-12-06 13:36:59.825193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.200 [2024-12-06 13:36:59.825200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.200 [2024-12-06 13:36:59.825206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.200 [2024-12-06 13:36:59.825212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.200 [2024-12-06 13:36:59.837071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.200 [2024-12-06 13:36:59.837583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-12-06 13:36:59.837615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.200 [2024-12-06 13:36:59.837624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.200 [2024-12-06 13:36:59.837810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.200 [2024-12-06 13:36:59.837982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.200 [2024-12-06 13:36:59.837990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.200 [2024-12-06 13:36:59.837996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.200 [2024-12-06 13:36:59.838002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.200 [2024-12-06 13:36:59.850031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.200 [2024-12-06 13:36:59.850520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-12-06 13:36:59.850550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.200 [2024-12-06 13:36:59.850560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.200 [2024-12-06 13:36:59.850746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.200 [2024-12-06 13:36:59.850917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.200 [2024-12-06 13:36:59.850924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.200 [2024-12-06 13:36:59.850934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.200 [2024-12-06 13:36:59.850940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.463 [2024-12-06 13:36:59.862974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.463 [2024-12-06 13:36:59.863588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.463 [2024-12-06 13:36:59.863620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.463 [2024-12-06 13:36:59.863629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.463 [2024-12-06 13:36:59.863815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.463 [2024-12-06 13:36:59.863987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.463 [2024-12-06 13:36:59.863994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.463 [2024-12-06 13:36:59.864000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.463 [2024-12-06 13:36:59.864006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.463 [2024-12-06 13:36:59.876038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.463 [2024-12-06 13:36:59.876585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.463 [2024-12-06 13:36:59.876616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.463 [2024-12-06 13:36:59.876625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.463 [2024-12-06 13:36:59.876812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.463 [2024-12-06 13:36:59.876983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.463 [2024-12-06 13:36:59.876990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.463 [2024-12-06 13:36:59.876996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.463 [2024-12-06 13:36:59.877002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.463 [2024-12-06 13:36:59.879568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:13.463 [2024-12-06 13:36:59.889043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.463 [2024-12-06 13:36:59.889452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.463 [2024-12-06 13:36:59.889491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.463 [2024-12-06 13:36:59.889499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.463 [2024-12-06 13:36:59.889684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.463 [2024-12-06 13:36:59.889856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.463 [2024-12-06 13:36:59.889863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.463 [2024-12-06 13:36:59.889869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.463 [2024-12-06 13:36:59.889875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.463 [2024-12-06 13:36:59.902065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.463 [2024-12-06 13:36:59.902477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.463 [2024-12-06 13:36:59.902496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.463 [2024-12-06 13:36:59.902502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.463 [2024-12-06 13:36:59.902672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.463 [2024-12-06 13:36:59.902842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.463 [2024-12-06 13:36:59.902849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.902855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.902860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:13.464 [2024-12-06 13:36:59.908744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.464 [2024-12-06 13:36:59.908765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.464 [2024-12-06 13:36:59.908771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.464 [2024-12-06 13:36:59.908777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:13.464 [2024-12-06 13:36:59.908782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.464 [2024-12-06 13:36:59.909872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.464 [2024-12-06 13:36:59.910023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.464 [2024-12-06 13:36:59.910025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.464 [2024-12-06 13:36:59.915042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.915557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.915590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.915599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.915785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.915957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.915964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.915970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.915978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:36:59.928013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.928668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.928701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.928711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.928895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.929072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.929080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.929085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.929092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:36:59.940959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.941568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.941600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.941609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.941793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.941965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.941972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.941978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.941984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:36:59.954027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.954668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.954700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.954709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.954893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.955064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.955072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.955077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.955083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:36:59.966951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.967398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.967414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.967420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.967593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.967762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.967770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.967779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.967784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:36:59.979999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.980595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.980605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.980790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.980962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.980970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.980976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.980982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:36:59.993008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:36:59.993572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:36:59.993604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:36:59.993613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:36:59.993799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:36:59.993970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:36:59.993978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:36:59.993984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:36:59.993989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:37:00.006026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:37:00.006301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:37:00.006317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:37:00.006324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:37:00.006500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:37:00.006671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:37:00.006677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.464 [2024-12-06 13:37:00.006683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.464 [2024-12-06 13:37:00.006688] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.464 [2024-12-06 13:37:00.019083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.464 [2024-12-06 13:37:00.019452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.464 [2024-12-06 13:37:00.019471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.464 [2024-12-06 13:37:00.019476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.464 [2024-12-06 13:37:00.019644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.464 [2024-12-06 13:37:00.019813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.464 [2024-12-06 13:37:00.019819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.019824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.019830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.032039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.032592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.032624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.032633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.032819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.032991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.032998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.033004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.033010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.045047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.045601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.045633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.045642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.045828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.045999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.046006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.046013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.046019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.058050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.058548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.058580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.058593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.058778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.058950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.058957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.058963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.058969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.071043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.071512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.071544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.071554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.071740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.071912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.071920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.071926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.071932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.083964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.084433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.084470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.084480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.084663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.084835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.084842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.084848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.084853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.096885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.097383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.097414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.097423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.097617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.097794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.097802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.097809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.097815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.465 [2024-12-06 13:37:00.109884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.465 [2024-12-06 13:37:00.110387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.465 [2024-12-06 13:37:00.110403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.465 [2024-12-06 13:37:00.110410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.465 [2024-12-06 13:37:00.110582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.465 [2024-12-06 13:37:00.110751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.465 [2024-12-06 13:37:00.110758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.465 [2024-12-06 13:37:00.110763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.465 [2024-12-06 13:37:00.110769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.122945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.123382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.123395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.123402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.123575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.123744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.123750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.123755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.123761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.135932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.136376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.136390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.136396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.136568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.136738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.136744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.136753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.136759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.148937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.149480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.149513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.149522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.149706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.149878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.149886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.149892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.149898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.161944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.162550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.162582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.162591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.162777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.162948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.162956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.162961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.162968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.174993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.175505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.175527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.175533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.175707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.175876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.175885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.175891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.175897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.188103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.188715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.188747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.188756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.188940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.189111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.189119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.189124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.189130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.201150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.201788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.201820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.201829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.728 [2024-12-06 13:37:00.202013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.728 [2024-12-06 13:37:00.202185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.728 [2024-12-06 13:37:00.202192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.728 [2024-12-06 13:37:00.202198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.728 [2024-12-06 13:37:00.202204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.728 [2024-12-06 13:37:00.214110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.728 [2024-12-06 13:37:00.214575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.728 [2024-12-06 13:37:00.214606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.728 [2024-12-06 13:37:00.214615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.214801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.214973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.214980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.214987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.214992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.227181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.227659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.227675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.227685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.227854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.228023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.228030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.228035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.228040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 4764.17 IOPS, 18.61 MiB/s [2024-12-06T12:37:00.388Z] [2024-12-06 13:37:00.240241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.240796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.240828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.240837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.241021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.241193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.241201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.241206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.241212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.253261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.253821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.253853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.253862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.254046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.254217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.254225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.254230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.254236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.266265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.266838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.266870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.266879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.267063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.267238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.267246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.267252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.267257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.279285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.279817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.279832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.279839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.280007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.280175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.280182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.280187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.280192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.292267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.292873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.292904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.292913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.293098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.293269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.293277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.293283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.293289] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.305327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.305819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.305835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.305842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.306010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.306178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.306185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.306194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.306200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.318376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.318895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.318926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.318935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.319120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.319292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.319299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.319306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.319312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.331345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.331738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.331754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.729 [2024-12-06 13:37:00.331760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.729 [2024-12-06 13:37:00.331929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.729 [2024-12-06 13:37:00.332098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.729 [2024-12-06 13:37:00.332105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.729 [2024-12-06 13:37:00.332110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.729 [2024-12-06 13:37:00.332115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.729 [2024-12-06 13:37:00.344296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.729 [2024-12-06 13:37:00.344769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.729 [2024-12-06 13:37:00.344783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.730 [2024-12-06 13:37:00.344789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.730 [2024-12-06 13:37:00.344957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.730 [2024-12-06 13:37:00.345125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.730 [2024-12-06 13:37:00.345133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.730 [2024-12-06 13:37:00.345138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.730 [2024-12-06 13:37:00.345143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.730 [2024-12-06 13:37:00.357336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.730 [2024-12-06 13:37:00.357774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.730 [2024-12-06 13:37:00.357788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.730 [2024-12-06 13:37:00.357794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.730 [2024-12-06 13:37:00.357961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.730 [2024-12-06 13:37:00.358130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.730 [2024-12-06 13:37:00.358137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.730 [2024-12-06 13:37:00.358144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.730 [2024-12-06 13:37:00.358150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.730 [2024-12-06 13:37:00.370340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.730 [2024-12-06 13:37:00.370943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.730 [2024-12-06 13:37:00.370975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.730 [2024-12-06 13:37:00.370984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.730 [2024-12-06 13:37:00.371168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.730 [2024-12-06 13:37:00.371339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.730 [2024-12-06 13:37:00.371347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.730 [2024-12-06 13:37:00.371353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.730 [2024-12-06 13:37:00.371359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.730 [2024-12-06 13:37:00.383395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.993 [2024-12-06 13:37:00.383963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.993 [2024-12-06 13:37:00.383996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.993 [2024-12-06 13:37:00.384005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.993 [2024-12-06 13:37:00.384189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.993 [2024-12-06 13:37:00.384360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.993 [2024-12-06 13:37:00.384368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.993 [2024-12-06 13:37:00.384374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.993 [2024-12-06 13:37:00.384380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.993 [2024-12-06 13:37:00.396409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.993 [2024-12-06 13:37:00.396905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.993 [2024-12-06 13:37:00.396922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.993 [2024-12-06 13:37:00.396932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.397101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.397269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.397276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.397282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.397287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.409463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.409898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.409913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.409919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.410087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.410256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.410263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.410268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.410274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.422448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.422931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.422945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.422950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.423118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.423286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.423293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.423299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.423305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.435485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.435951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.435965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.435971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.436139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.436311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.436318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.436323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.436329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.448515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.448952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.448965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.448970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.449138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.449307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.449313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.449319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.449324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.461515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.461954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.461968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.461974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.462142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.462310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.462317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.462322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.462327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.474511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.474987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.475000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.475006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.475173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.475341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.475349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.475358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.475363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.487564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.488027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.488041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.994 [2024-12-06 13:37:00.488047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.994 [2024-12-06 13:37:00.488215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.994 [2024-12-06 13:37:00.488384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.994 [2024-12-06 13:37:00.488391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.994 [2024-12-06 13:37:00.488397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.994 [2024-12-06 13:37:00.488402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.994 [2024-12-06 13:37:00.500587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.994 [2024-12-06 13:37:00.501066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.994 [2024-12-06 13:37:00.501079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.501085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.501253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.501422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.501429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.501435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.501441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.513635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.514074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.514087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.514093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.514261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.514429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.514438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.514443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.514448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.526634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.527124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.527137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.527144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.527312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.527486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.527493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.527498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.527503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.539680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.540156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.540169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.540175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.540343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.540516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.540524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.540529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.540533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.552707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.553179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.553194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.553200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.553368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.553541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.553548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.553553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.553558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.565744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.566210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.566223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.566232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.566399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.566571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.566578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.566584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.566589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.578767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.579215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.579229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.579234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.579402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.579573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.579581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.579586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.579591] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.995 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:13.995 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.995 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.995 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.995 [2024-12-06 13:37:00.591776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.592249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.592262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.592269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.592437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.592609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.995 [2024-12-06 13:37:00.592617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.995 [2024-12-06 13:37:00.592622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.995 [2024-12-06 13:37:00.592627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.995 [2024-12-06 13:37:00.604809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.995 [2024-12-06 13:37:00.605275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.995 [2024-12-06 13:37:00.605291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.995 [2024-12-06 13:37:00.605297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.995 [2024-12-06 13:37:00.605469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.995 [2024-12-06 13:37:00.605638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-12-06 13:37:00.605645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-12-06 13:37:00.605650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-12-06 13:37:00.605655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.996 [2024-12-06 13:37:00.617830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-12-06 13:37:00.618269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-12-06 13:37:00.618282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-12-06 13:37:00.618288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.996 [2024-12-06 13:37:00.618462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.996 [2024-12-06 13:37:00.618631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-12-06 13:37:00.618638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-12-06 13:37:00.618644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-12-06 13:37:00.618649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.996 [2024-12-06 13:37:00.630826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-12-06 13:37:00.631301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-12-06 13:37:00.631313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-12-06 13:37:00.631319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.996 [2024-12-06 13:37:00.631492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.996 [2024-12-06 13:37:00.631662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-12-06 13:37:00.631668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-12-06 13:37:00.631673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-12-06 13:37:00.631678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:13.996 [2024-12-06 13:37:00.634493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.996 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:13.996 [2024-12-06 13:37:00.643849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:13.996 [2024-12-06 13:37:00.644289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.996 [2024-12-06 13:37:00.644303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:13.996 [2024-12-06 13:37:00.644308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:13.996 [2024-12-06 13:37:00.644480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:13.996 [2024-12-06 13:37:00.644648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:13.996 [2024-12-06 13:37:00.644655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:13.996 [2024-12-06 13:37:00.644660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:13.996 [2024-12-06 13:37:00.644665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.258 [2024-12-06 13:37:00.656842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.258 [2024-12-06 13:37:00.657318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-12-06 13:37:00.657331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:14.258 [2024-12-06 13:37:00.657337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:14.258 [2024-12-06 13:37:00.657508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:14.258 [2024-12-06 13:37:00.657676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.258 [2024-12-06 13:37:00.657684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.258 [2024-12-06 13:37:00.657690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.258 [2024-12-06 13:37:00.657695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.258 [2024-12-06 13:37:00.669867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.258 [2024-12-06 13:37:00.670354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-12-06 13:37:00.670367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:14.258 [2024-12-06 13:37:00.670373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:14.258 [2024-12-06 13:37:00.670545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:14.258 [2024-12-06 13:37:00.670713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.258 [2024-12-06 13:37:00.670720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.258 [2024-12-06 13:37:00.670726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.258 [2024-12-06 13:37:00.670738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.258 Malloc0 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.258 [2024-12-06 13:37:00.682872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.258 [2024-12-06 13:37:00.683295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-12-06 13:37:00.683309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:14.258 [2024-12-06 13:37:00.683315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:14.258 [2024-12-06 13:37:00.683486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:14.258 [2024-12-06 13:37:00.683654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.258 [2024-12-06 13:37:00.683661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.258 [2024-12-06 13:37:00.683666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.258 [2024-12-06 13:37:00.683671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.258 [2024-12-06 13:37:00.695842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.258 [2024-12-06 13:37:00.696322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.258 [2024-12-06 13:37:00.696335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2451c20 with addr=10.0.0.2, port=4420 00:29:14.258 [2024-12-06 13:37:00.696341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451c20 is same with the state(6) to be set 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.258 [2024-12-06 13:37:00.696513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2451c20 (9): Bad file descriptor 00:29:14.258 [2024-12-06 13:37:00.696683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.258 [2024-12-06 13:37:00.696689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.258 [2024-12-06 13:37:00.696694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.258 [2024-12-06 13:37:00.696700] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:14.258 [2024-12-06 13:37:00.703473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.258 13:37:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2332062 00:29:14.258 [2024-12-06 13:37:00.708880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.258 [2024-12-06 13:37:00.772739] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:29:15.767 4916.71 IOPS, 19.21 MiB/s [2024-12-06T12:37:03.367Z] 5927.38 IOPS, 23.15 MiB/s [2024-12-06T12:37:04.309Z] 6664.89 IOPS, 26.03 MiB/s [2024-12-06T12:37:05.262Z] 7302.90 IOPS, 28.53 MiB/s [2024-12-06T12:37:06.645Z] 7814.64 IOPS, 30.53 MiB/s [2024-12-06T12:37:07.583Z] 8222.92 IOPS, 32.12 MiB/s [2024-12-06T12:37:08.523Z] 8601.38 IOPS, 33.60 MiB/s [2024-12-06T12:37:09.467Z] 8903.14 IOPS, 34.78 MiB/s 00:29:22.808 Latency(us) 00:29:22.808 [2024-12-06T12:37:09.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.808 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:22.808 Verification LBA range: start 0x0 length 0x4000 00:29:22.808 Nvme1n1 : 15.01 9159.81 35.78 12232.48 0.00 5963.71 590.51 13817.17 00:29:22.808 [2024-12-06T12:37:09.467Z] =================================================================================================================== 00:29:22.808 [2024-12-06T12:37:09.467Z] Total : 9159.81 35.78 12232.48 0.00 5963.71 590.51 13817.17 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:22.808 13:37:09 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.808 rmmod nvme_tcp 00:29:22.808 rmmod nvme_fabrics 00:29:22.808 rmmod nvme_keyring 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2333106 ']' 00:29:22.808 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2333106 00:29:22.809 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2333106 ']' 00:29:22.809 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2333106 00:29:22.809 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:29:22.809 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.809 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333106 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333106' 00:29:23.068 killing process with pid 2333106 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 2333106 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2333106 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.068 13:37:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.666 00:29:25.666 real 0m28.365s 00:29:25.666 user 1m3.924s 00:29:25.666 sys 0m7.722s 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.666 ************************************ 00:29:25.666 END TEST nvmf_bdevperf 00:29:25.666 ************************************ 00:29:25.666 13:37:11 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.666 ************************************ 00:29:25.666 START TEST nvmf_target_disconnect 00:29:25.666 ************************************ 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:25.666 * Looking for test storage... 00:29:25.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:25.666 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.667 13:37:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:25.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.667 --rc genhtml_branch_coverage=1 00:29:25.667 --rc genhtml_function_coverage=1 00:29:25.667 --rc genhtml_legend=1 00:29:25.667 --rc geninfo_all_blocks=1 00:29:25.667 --rc geninfo_unexecuted_blocks=1 
00:29:25.667 00:29:25.667 ' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:25.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.667 --rc genhtml_branch_coverage=1 00:29:25.667 --rc genhtml_function_coverage=1 00:29:25.667 --rc genhtml_legend=1 00:29:25.667 --rc geninfo_all_blocks=1 00:29:25.667 --rc geninfo_unexecuted_blocks=1 00:29:25.667 00:29:25.667 ' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:25.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.667 --rc genhtml_branch_coverage=1 00:29:25.667 --rc genhtml_function_coverage=1 00:29:25.667 --rc genhtml_legend=1 00:29:25.667 --rc geninfo_all_blocks=1 00:29:25.667 --rc geninfo_unexecuted_blocks=1 00:29:25.667 00:29:25.667 ' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:25.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.667 --rc genhtml_branch_coverage=1 00:29:25.667 --rc genhtml_function_coverage=1 00:29:25.667 --rc genhtml_legend=1 00:29:25.667 --rc geninfo_all_blocks=1 00:29:25.667 --rc geninfo_unexecuted_blocks=1 00:29:25.667 00:29:25.667 ' 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.667 13:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.667 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.668 13:37:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:25.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.668 13:37:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.949 
13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.949 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:33.950 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:33.950 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:33.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:33.950 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.950 13:37:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:29:33.950 00:29:33.950 --- 10.0.0.2 ping statistics --- 00:29:33.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.950 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:33.950 00:29:33.950 --- 10.0.0.1 ping statistics --- 00:29:33.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.950 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.950 13:37:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:33.950 ************************************ 00:29:33.950 START TEST nvmf_target_disconnect_tc1 00:29:33.950 ************************************ 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:33.950 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.951 [2024-12-06 13:37:19.685066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.951 [2024-12-06 13:37:19.685168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdf5ae0 with 
addr=10.0.0.2, port=4420 00:29:33.951 [2024-12-06 13:37:19.685205] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:33.951 [2024-12-06 13:37:19.685223] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:33.951 [2024-12-06 13:37:19.685232] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:33.951 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:33.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:33.951 Initializing NVMe Controllers 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:33.951 00:29:33.951 real 0m0.142s 00:29:33.951 user 0m0.057s 00:29:33.951 sys 0m0.085s 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.951 ************************************ 00:29:33.951 END TEST nvmf_target_disconnect_tc1 00:29:33.951 ************************************ 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.951 13:37:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:33.951 ************************************ 00:29:33.951 START TEST nvmf_target_disconnect_tc2 00:29:33.951 ************************************ 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2339157 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2339157 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2339157 ']' 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.951 13:37:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.951 [2024-12-06 13:37:19.845499] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:29:33.951 [2024-12-06 13:37:19.845558] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.951 [2024-12-06 13:37:19.944850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.951 [2024-12-06 13:37:19.997705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.951 [2024-12-06 13:37:19.997753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.951 [2024-12-06 13:37:19.997762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.951 [2024-12-06 13:37:19.997770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.951 [2024-12-06 13:37:19.997776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:33.951 [2024-12-06 13:37:19.999762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:33.951 [2024-12-06 13:37:19.999901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:33.951 [2024-12-06 13:37:20.000060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:33.951 [2024-12-06 13:37:20.000060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 Malloc0 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.213 13:37:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 [2024-12-06 13:37:20.757668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.213 13:37:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 [2024-12-06 13:37:20.798094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2339507 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:34.213 13:37:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.785 13:37:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2339157 00:29:36.785 13:37:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 
00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Write completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 [2024-12-06 13:37:22.837011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.785 starting I/O failed 00:29:36.785 Read completed with error (sct=0, sc=8) 00:29:36.786 
starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O 
failed 00:29:36.786 [2024-12-06 13:37:22.837329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.786 [2024-12-06 13:37:22.837694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.837717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.838084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.838096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.838378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.838389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.838723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.838737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.839101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.839112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 
00:29:36.786 [2024-12-06 13:37:22.839306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.839318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.839709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.839767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.840191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.840205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.840745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.840805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.841116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.841129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 
00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read 
completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Write completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 Read completed with error (sct=0, sc=8) 00:29:36.786 starting I/O failed 00:29:36.786 [2024-12-06 13:37:22.841430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.786 [2024-12-06 13:37:22.841796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.841872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 00:29:36.786 [2024-12-06 13:37:22.842239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.786 [2024-12-06 13:37:22.842256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.786 qpair failed and we were unable to recover it. 
00:29:36.786 [2024-12-06 13:37:22.842719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.786 [2024-12-06 13:37:22.842784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.786 qpair failed and we were unable to recover it.
00:29:36.789 [the three-line connect()/qpair-failure sequence above repeats with new timestamps through 13:37:22.877490; every reconnect attempt to 10.0.0.2 port 4420 fails with errno = 111 and the qpair is not recovered]
00:29:36.789 [2024-12-06 13:37:22.877861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.877875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.789 [2024-12-06 13:37:22.878205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.878218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.789 [2024-12-06 13:37:22.878527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.878540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.789 [2024-12-06 13:37:22.878725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.878737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.789 [2024-12-06 13:37:22.879076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.879089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 
00:29:36.789 [2024-12-06 13:37:22.879436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.879450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.789 [2024-12-06 13:37:22.879809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.879826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.789 [2024-12-06 13:37:22.880140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.789 [2024-12-06 13:37:22.880153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.789 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.880506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.880519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.880850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.880864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.881185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.881198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.881518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.881530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.881889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.881902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.882219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.882231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.882573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.882586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.882897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.882909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.883221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.883235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.883566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.883579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.883900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.883913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.884249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.884264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.884582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.884595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.884922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.884936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.885263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.885276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.885622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.885637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.885949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.885962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.886146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.886159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.886373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.886387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.886719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.886732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.887067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.887080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.887428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.887441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.887776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.887790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.888106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.888118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.888435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.888448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.888804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.888819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.889154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.889166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.889510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.889525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.889839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.889852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.890189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.890204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.890522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.890536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.890865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.890880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.891203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.891216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.891560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.891575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.891909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.891921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.892253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.892265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.892642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.892655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 
00:29:36.790 [2024-12-06 13:37:22.892960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.790 [2024-12-06 13:37:22.892972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.790 qpair failed and we were unable to recover it. 00:29:36.790 [2024-12-06 13:37:22.893302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.893318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.893637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.893650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.893999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.894012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.894323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.894335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 
00:29:36.791 [2024-12-06 13:37:22.894654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.894667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.894901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.894913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.895237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.895252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.895599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.895614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.895947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.895960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 
00:29:36.791 [2024-12-06 13:37:22.896315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.896327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.896675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.896689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.896996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.897008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.897182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.897194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.897536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.897550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 
00:29:36.791 [2024-12-06 13:37:22.897887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.897900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.898102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.898116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.898467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.898480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.898810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.898822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.899160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.899175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 
00:29:36.791 [2024-12-06 13:37:22.899504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.899517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.899834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.899849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.900196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.900210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.900524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.900537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.900840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.900852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 
00:29:36.791 [2024-12-06 13:37:22.901231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.901244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.901567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.901580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.901920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.901933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.902259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.902271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 00:29:36.791 [2024-12-06 13:37:22.902597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.791 [2024-12-06 13:37:22.902611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.791 qpair failed and we were unable to recover it. 
00:29:36.791 [2024-12-06 13:37:22.902945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.791 [2024-12-06 13:37:22.902957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.791 qpair failed and we were unable to recover it.
00:29:36.795 [2024-12-06 13:37:22.942225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.942240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.942574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.942587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.942946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.942961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.943308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.943321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.943654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.943668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 
00:29:36.795 [2024-12-06 13:37:22.944020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.944034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.944372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.944385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.944701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.944714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.944931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.944946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.945257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.945273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 
00:29:36.795 [2024-12-06 13:37:22.945620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.945633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.945971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.945986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.946312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.946325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.946673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.946689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.947036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.947049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 
00:29:36.795 [2024-12-06 13:37:22.947236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.947250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.947486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.947504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.947745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.947762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.948080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.948094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.948419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.948434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 
00:29:36.795 [2024-12-06 13:37:22.948771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.948784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.949126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.949142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.949497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.949512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.795 qpair failed and we were unable to recover it. 00:29:36.795 [2024-12-06 13:37:22.949844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.795 [2024-12-06 13:37:22.949858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.950201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.950216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.950560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.950573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.950912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.950927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.951261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.951276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.951631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.951646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.951995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.952010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.952396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.952411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.952743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.952757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.953069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.953082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.953406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.953422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.953744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.953758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.954105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.954119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.954475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.954489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.954840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.954855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.955198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.955213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.955524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.955537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.955884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.955897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.956235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.956248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.956602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.956617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.956937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.956950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.957293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.957307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.957630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.957644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.957960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.957974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.958323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.958336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.958686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.958702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.959016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.959029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.959374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.959389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.959728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.959741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.960088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.960103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.960464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.960477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.960801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.960815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.961159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.961172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.961353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.961369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.961721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.961733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.962051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.962063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.962400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.962414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 
00:29:36.796 [2024-12-06 13:37:22.962729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.962743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.796 [2024-12-06 13:37:22.963062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.796 [2024-12-06 13:37:22.963076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.796 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.963420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.963435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.963793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.963808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.963994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.964007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 
00:29:36.797 [2024-12-06 13:37:22.964329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.964343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.964689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.964704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.965051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.965065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.965435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.965449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.965786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.965802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 
00:29:36.797 [2024-12-06 13:37:22.966110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.966124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.966478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.966493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.966833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.966847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.967159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.967173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.967533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.967546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 
00:29:36.797 [2024-12-06 13:37:22.967892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.967907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.968086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.968102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.968429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.968442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.968768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.968783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 00:29:36.797 [2024-12-06 13:37:22.969118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.797 [2024-12-06 13:37:22.969130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.797 qpair failed and we were unable to recover it. 
00:29:36.800 [identical entries elided: the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 repeated through 2024-12-06 13:37:23.006]
00:29:36.800 [2024-12-06 13:37:23.006932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.006945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.007288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.007302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.007504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.007518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.007832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.007845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.008201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.008214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 
00:29:36.800 [2024-12-06 13:37:23.008535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.008548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.008885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.008898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.009232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.009246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.009592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.009606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.009934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.009948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 
00:29:36.800 [2024-12-06 13:37:23.010306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.010319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.010685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.010700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.011039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.011052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.011272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.011288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.011604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.011618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 
00:29:36.800 [2024-12-06 13:37:23.011959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.011972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.012326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.012339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.012715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.800 [2024-12-06 13:37:23.012728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.800 qpair failed and we were unable to recover it. 00:29:36.800 [2024-12-06 13:37:23.013027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.013039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.013396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.013409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.013711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.013724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.014044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.014057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.014412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.014426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.014788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.014802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.015125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.015140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.015482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.015495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.015849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.015864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.016182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.016195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.016558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.016573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.016918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.016931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.017278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.017291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.017639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.017653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.017989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.018001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.018321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.018334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.018602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.018616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.018933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.018946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.019281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.019293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.019622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.019636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.019944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.019957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.020309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.020321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.020641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.020654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.020965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.020979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.021315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.021327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.021567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.021580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.021942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.021957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.022300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.022313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.022655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.022669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.023010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.023023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.023219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.023232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.023571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.023585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.023939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.023953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.024296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.024309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.024644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.024657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.025008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.025024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.025283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.025295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 
00:29:36.801 [2024-12-06 13:37:23.025595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.801 [2024-12-06 13:37:23.025608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.801 qpair failed and we were unable to recover it. 00:29:36.801 [2024-12-06 13:37:23.025933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.025946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.026162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.026176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.026504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.026517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.026846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.026858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 
00:29:36.802 [2024-12-06 13:37:23.027207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.027220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.027575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.027588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.027959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.027972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.028181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.028194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.028533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.028546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 
00:29:36.802 [2024-12-06 13:37:23.028856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.028870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.029240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.029253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.029588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.029601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.029925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.029938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.030278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.030291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 
00:29:36.802 [2024-12-06 13:37:23.030632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.030645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.030846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.030860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.031215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.031228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.031578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.031593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 00:29:36.802 [2024-12-06 13:37:23.031939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.802 [2024-12-06 13:37:23.031952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.802 qpair failed and we were unable to recover it. 
00:29:36.802 [2024-12-06 13:37:23.032267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.032281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.032630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.032643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.032984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.032998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.033342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.033356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.033717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.034040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.034053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.034400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.034413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.034754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.034767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.035118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.035130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.035446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.035465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.035817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.035829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.036173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.036186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.036531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.036544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.036908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.036921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.037270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.037285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.037611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.037624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.037962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.037974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.038286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.038299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.038647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.802 [2024-12-06 13:37:23.038664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.802 qpair failed and we were unable to recover it.
00:29:36.802 [2024-12-06 13:37:23.039049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.039062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.039249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.039261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.039574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.039590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.039907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.039921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.040218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.040229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.040317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.040328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.040615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.040629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.040939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.040952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.041164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.041177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.041474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.041487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.041831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.041844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.042186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.042200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.042555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.042568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.042875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.042889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.043231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.043244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.043572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.043585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.043927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.043940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.044290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.044302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.044640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.044653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.045030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.045043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.045240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.045252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.045590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.045604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.045786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.045798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.046131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.046144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.046486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.046501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.046842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.046856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.047204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.047217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.047570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.047584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.047806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.047820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.048199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.048212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.048538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.048551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.048905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.048918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.049272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.803 [2024-12-06 13:37:23.049286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.803 qpair failed and we were unable to recover it.
00:29:36.803 [2024-12-06 13:37:23.049609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.049622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.049954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.049967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.050311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.050324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.050666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.050679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.051018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.051032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.051462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.051475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.051801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.051816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.052167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.052182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.052399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.052413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.052704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.052717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.053017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.053030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.053356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.053370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.053677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.053690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.054035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.054047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.054396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.054411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.054744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.054759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.055086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.055100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.055437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.055451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.055787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.055801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.056110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.056124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.056460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.056474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.056667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.056680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.056987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.057001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.057216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.057229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.057471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.057484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.057683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.057696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.057874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.057888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.058231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.058245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.058587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.058601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.058928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.058940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.059262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.059275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.059629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.059643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.059973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.059987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.060317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.060331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.060514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.060528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.060724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.060738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.061128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.061142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.061478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.804 [2024-12-06 13:37:23.061493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.804 qpair failed and we were unable to recover it.
00:29:36.804 [2024-12-06 13:37:23.061842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.061855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.062224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.062237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.062579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.062592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.062925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.062937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.063258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.063272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.063624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.063637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.063807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.063821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.064204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.064217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.064525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.064541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.064873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.064886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.065217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.065229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.065555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.065568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.065893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.065905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.066219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.066232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.066581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.066594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.066921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.066933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.067273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.067288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.067635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.067648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.067972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.067984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.068332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.068346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.068674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.068687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.069040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.069053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.069396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.069408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.069723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.069737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.070058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.805 [2024-12-06 13:37:23.070072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.805 qpair failed and we were unable to recover it.
00:29:36.805 [2024-12-06 13:37:23.070375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.070390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.070652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.070666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.070981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.070996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.071331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.071345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.071650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.071665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 
00:29:36.805 [2024-12-06 13:37:23.071982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.071996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.072325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.072339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.072647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.072661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.072973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.072988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.073330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.073345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 
00:29:36.805 [2024-12-06 13:37:23.073682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.073697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.074040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.074058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.074415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.805 [2024-12-06 13:37:23.074429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.805 qpair failed and we were unable to recover it. 00:29:36.805 [2024-12-06 13:37:23.074748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.074763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.075140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.075155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.075462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.075477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.075700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.075714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.076035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.076048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.076284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.076297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.076568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.076581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.076935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.076949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.077270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.077284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.077632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.077644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.077972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.077987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.078324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.078338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.078730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.078746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.079068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.079081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.079438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.079451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.079789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.079803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.080122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.080134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.080446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.080465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.080730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.080744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.081067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.081080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.081408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.081421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.081790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.081805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.082119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.082132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.082481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.082494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.082836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.082850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.083174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.083187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.083533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.083547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.083931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.083946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.084265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.084277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.084507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.084519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.084847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.084861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.085214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.085228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.085566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.085579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.085916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.085928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.086288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.086302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.086697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.086711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.087033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.087047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 
00:29:36.806 [2024-12-06 13:37:23.087388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.806 [2024-12-06 13:37:23.087401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.806 qpair failed and we were unable to recover it. 00:29:36.806 [2024-12-06 13:37:23.087738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.087751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.088091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.088105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.088444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.088466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.088725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.088740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 
00:29:36.807 [2024-12-06 13:37:23.089049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.089062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.089411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.089424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.089784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.089799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.090144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.090157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.090335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.090351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 
00:29:36.807 [2024-12-06 13:37:23.090755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.090769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.091099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.091113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.091299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.091315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.091564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.091580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.091939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.091953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 
00:29:36.807 [2024-12-06 13:37:23.092295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.092307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.092622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.092635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.092864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.092879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.093050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.093063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.093415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.093430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 
00:29:36.807 [2024-12-06 13:37:23.093751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.093764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.094085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.094098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.094463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.094477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.094802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.094815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 00:29:36.807 [2024-12-06 13:37:23.095153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.095167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 
00:29:36.807 [2024-12-06 13:37:23.095501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.807 [2024-12-06 13:37:23.095515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.807 qpair failed and we were unable to recover it. 
00:29:36.810 [previous 3 messages repeated 114 more times between 13:37:23.095882 and 13:37:23.133730 — every connect() attempt to 10.0.0.2:4420 on tqpair=0x7f0a38000b90 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:29:36.810 [2024-12-06 13:37:23.134077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.810 [2024-12-06 13:37:23.134094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.810 qpair failed and we were unable to recover it. 00:29:36.810 [2024-12-06 13:37:23.134398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.810 [2024-12-06 13:37:23.134411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.810 qpair failed and we were unable to recover it. 00:29:36.810 [2024-12-06 13:37:23.134748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.134763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.135095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.135109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.135495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.135508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.135690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.135705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.136010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.136024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.136370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.136387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.136726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.136739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.137059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.137077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.137284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.137298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.137611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.137626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.137972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.137987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.138284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.138297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.138649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.138662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.139008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.139024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.139351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.139366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.139686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.139702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.139906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.139920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.140234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.140249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.140447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.140469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.140778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.140791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.141111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.141129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.141327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.141341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.141634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.141650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.142007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.142020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.142370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.142384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.142805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.142819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.142882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.142891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.143219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.143232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.143549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.143562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.143803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.143817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.144143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.144159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.144482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.144499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.144834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.144849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 
00:29:36.811 [2024-12-06 13:37:23.145202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.145219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.811 [2024-12-06 13:37:23.145641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.811 [2024-12-06 13:37:23.145655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.811 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.145831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.145845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.146047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.146060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.146272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.146287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 
00:29:36.812 [2024-12-06 13:37:23.146632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.146646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.146974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.146989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.147348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.147363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.147681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.147696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.148009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.148023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 
00:29:36.812 [2024-12-06 13:37:23.148367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.148381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.148703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.148717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.149106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.149119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.149466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.149483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.149719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.149732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 
00:29:36.812 [2024-12-06 13:37:23.150046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.150062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.150411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.150424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.150618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.150633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.812 [2024-12-06 13:37:23.150968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.812 [2024-12-06 13:37:23.150983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.812 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.151304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.151319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 
00:29:36.813 [2024-12-06 13:37:23.151645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.151659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.152005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.152019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.152210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.152226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.152578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.152593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.152925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.152940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 
00:29:36.813 [2024-12-06 13:37:23.153277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.153290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.153629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.153644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.153960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.153976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.154281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.154293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.154507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.154522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 
00:29:36.813 [2024-12-06 13:37:23.154864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.154878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.155225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.155239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.155573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.155586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.155928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.155941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.156128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.156140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 
00:29:36.813 [2024-12-06 13:37:23.156495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.156511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.156848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.156865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.157199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.157212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.157561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.157576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 00:29:36.813 [2024-12-06 13:37:23.157914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.813 [2024-12-06 13:37:23.157929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.813 qpair failed and we were unable to recover it. 
00:29:36.813 [2024-12-06 13:37:23.158151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.813 [2024-12-06 13:37:23.158165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.813 qpair failed and we were unable to recover it.
[... identical error triples (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated continuously from 13:37:23.158 through 13:37:23.196; repeats omitted ...]
00:29:36.816 [2024-12-06 13:37:23.196348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.816 [2024-12-06 13:37:23.196362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.816 qpair failed and we were unable to recover it.
00:29:36.816 [2024-12-06 13:37:23.196674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.816 [2024-12-06 13:37:23.196688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.816 qpair failed and we were unable to recover it. 00:29:36.816 [2024-12-06 13:37:23.197041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.816 [2024-12-06 13:37:23.197056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.816 qpair failed and we were unable to recover it. 00:29:36.816 [2024-12-06 13:37:23.197370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.816 [2024-12-06 13:37:23.197384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.816 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.197585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.197600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.197927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.197941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.198284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.198297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.198647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.198660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.199001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.199014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.199369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.199384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.199601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.199615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.199964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.199979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.200198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.200213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.200536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.200550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.200861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.200875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.201222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.201236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.201581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.201597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.201916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.201930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.202255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.202269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.202611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.202624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.202938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.202953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.203153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.203167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.203357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.203371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.203666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.203680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.203872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.203886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.204219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.204233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.204615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.204629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.204946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.204961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.205297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.205310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.205636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.205649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.205993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.206007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.206330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.206346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.206711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.206725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.207071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.207083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.207398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.207411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 00:29:36.817 [2024-12-06 13:37:23.207735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.207748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.817 qpair failed and we were unable to recover it. 
00:29:36.817 [2024-12-06 13:37:23.207939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.817 [2024-12-06 13:37:23.207953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.208294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.208307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.208663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.208677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.209006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.209019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.209367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.209380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.209707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.209720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.210075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.210088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.210428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.210440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.210757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.210771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.211091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.211104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.211415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.211428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.211748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.211761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.212097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.212110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.212432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.212445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.212797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.212810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.213155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.213167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.213511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.213525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.213885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.213897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.214077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.214092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.214433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.214447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.214640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.214654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.214960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.214972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.215175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.215190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.215359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.215374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.215710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.215726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.216072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.216086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.216428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.216443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.216786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.216801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.217145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.217160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.217382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.217397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.217732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.217747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.218067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.218081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.218413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.218427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.218743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.218758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.219059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.219073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 
00:29:36.818 [2024-12-06 13:37:23.219290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.219307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.219634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.219648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.818 [2024-12-06 13:37:23.219825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.818 [2024-12-06 13:37:23.219837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.818 qpair failed and we were unable to recover it. 00:29:36.819 [2024-12-06 13:37:23.220151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.819 [2024-12-06 13:37:23.220164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.819 qpair failed and we were unable to recover it. 00:29:36.819 [2024-12-06 13:37:23.220405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.819 [2024-12-06 13:37:23.220417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.819 qpair failed and we were unable to recover it. 
00:29:36.819 [2024-12-06 13:37:23.220741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.819 [2024-12-06 13:37:23.220754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.819 qpair failed and we were unable to recover it. 
00:29:36.822 [2024-12-06 13:37:23.259087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.259105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.259452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.259477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.259804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.259816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.260169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.260181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.260524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.260537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 
00:29:36.822 [2024-12-06 13:37:23.260858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.260871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.261187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.261199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.261557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.261571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.261897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.261909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.262248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.262261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 
00:29:36.822 [2024-12-06 13:37:23.262579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.262592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.262916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.262930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.263116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.263128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.263471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.263485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.263801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.263814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 
00:29:36.822 [2024-12-06 13:37:23.264000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.264013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.264338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.264351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.264702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.264716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.265031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.265043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.265393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.265405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 
00:29:36.822 [2024-12-06 13:37:23.265609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.265621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.265970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.265983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.266197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.266209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.266476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.266488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.266828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.266842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 
00:29:36.822 [2024-12-06 13:37:23.267192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.267205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.822 [2024-12-06 13:37:23.267523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.822 [2024-12-06 13:37:23.267536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.822 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.267713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.267725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.268023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.268035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.268388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.268401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.268752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.268767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.269119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.269132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.269494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.269507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.269833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.269845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.270163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.270175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.270525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.270538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.270886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.270899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.271221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.271234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.271555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.271568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.271890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.271903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.272241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.272256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.272681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.272694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.273018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.273033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.273347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.273360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.273691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.273704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.274091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.274104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.274461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.274475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.274690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.274704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.275041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.275055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.275382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.275395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.275765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.275778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.276090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.276102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.276323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.276335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.276661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.276675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.277023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.277038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.277347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.277361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.277644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.277657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.277986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.277999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.278337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.278349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.278671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.278685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.279030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.279043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.279386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.279400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.279723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.279738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.280071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.280083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 00:29:36.823 [2024-12-06 13:37:23.280389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.823 [2024-12-06 13:37:23.280402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.823 qpair failed and we were unable to recover it. 
00:29:36.823 [2024-12-06 13:37:23.280737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.280751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.280948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.280959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.281267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.281280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.281482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.281497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.281794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.281808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 
00:29:36.824 [2024-12-06 13:37:23.282185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.282197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.282376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.282389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.282721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.282734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.283062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.283075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.283427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.283438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 
00:29:36.824 [2024-12-06 13:37:23.283760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.283774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.284098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.284111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.284464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.284478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.284806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.284817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 00:29:36.824 [2024-12-06 13:37:23.284993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.824 [2024-12-06 13:37:23.285006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.824 qpair failed and we were unable to recover it. 
00:29:36.827 [2024-12-06 13:37:23.321533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.321546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.321862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.321875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.322186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.322198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.322541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.322554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.322868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.322880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 
00:29:36.827 [2024-12-06 13:37:23.323209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.323222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.323547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.323559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.323841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.323854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.324197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.324211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.324558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.324571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 
00:29:36.827 [2024-12-06 13:37:23.324910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.324925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.325375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.325389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.325738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.325753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.326101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.326114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.326464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.326477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 
00:29:36.827 [2024-12-06 13:37:23.326813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.326826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.327140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.327152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.327505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.327518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.327698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.327711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.328055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.328068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 
00:29:36.827 [2024-12-06 13:37:23.328396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.827 [2024-12-06 13:37:23.328409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.827 qpair failed and we were unable to recover it. 00:29:36.827 [2024-12-06 13:37:23.328644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.328657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.328993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.329005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.329333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.329348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.329551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.329564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.329872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.329886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.330235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.330249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.330618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.330631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.330978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.330992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.331337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.331350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.331707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.331719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.332114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.332126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.332479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.332492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.332829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.332841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.333178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.333191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.333537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.333551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.333911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.333923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.334103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.334119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.334473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.334488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.334830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.334844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.335164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.335176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.335362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.335376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.335725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.335739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.336082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.336094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.336415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.336429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.336619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.336634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.336961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.336975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.337158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.337172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.337492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.337507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.337835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.337848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.338196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.338209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.338548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.338562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.338735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.338747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.339136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.339150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.339468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.339481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 
00:29:36.828 [2024-12-06 13:37:23.339824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.339837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.340155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.340170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.340479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.340494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.340872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.340887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.828 qpair failed and we were unable to recover it. 00:29:36.828 [2024-12-06 13:37:23.341229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.828 [2024-12-06 13:37:23.341242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 
00:29:36.829 [2024-12-06 13:37:23.341564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.341577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.341767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.341983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.341995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.342158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.342172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.342573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.342588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 
00:29:36.829 [2024-12-06 13:37:23.342916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.342932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.343275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.343289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.343632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.343646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.343833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.343846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.344176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.344190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 
00:29:36.829 [2024-12-06 13:37:23.344527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.344540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.344888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.344901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.345206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.345220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.345524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.345538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 00:29:36.829 [2024-12-06 13:37:23.345880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.345894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it. 
00:29:36.829 [2024-12-06 13:37:23.346219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-12-06 13:37:23.346233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.829 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats over 100 more times, with target timestamps advancing from 13:37:23.346 to 13:37:23.385 ...]
00:29:36.832 [2024-12-06 13:37:23.386072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.386088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.386310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.386322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.386636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.386649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.386974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.386989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.387300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.387316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 
00:29:36.832 [2024-12-06 13:37:23.387683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.387699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.388042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.388059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.388377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.388395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.388707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.388724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.389072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.389089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 
00:29:36.832 [2024-12-06 13:37:23.389423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.389439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.389812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.389829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.390187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.390204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.390411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.390429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.390802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.390817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 
00:29:36.832 [2024-12-06 13:37:23.391156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.391171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.391487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.391503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.391838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.391855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.392183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.392199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.392516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.392534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 
00:29:36.832 [2024-12-06 13:37:23.392891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.392907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.393269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.393285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.393626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.832 [2024-12-06 13:37:23.393641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.832 qpair failed and we were unable to recover it. 00:29:36.832 [2024-12-06 13:37:23.393978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.393995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.394309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.394324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 
00:29:36.833 [2024-12-06 13:37:23.394646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.394661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.395007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.395022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.395086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.395101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.395320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.395336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.395679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.395696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 
00:29:36.833 [2024-12-06 13:37:23.396037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.396053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.396384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.396399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.396716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.396731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.397051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.397066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.397403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.397417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 
00:29:36.833 [2024-12-06 13:37:23.397770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.397786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.398022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.398036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.398351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.398367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.398809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.398825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.399154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.399169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 
00:29:36.833 [2024-12-06 13:37:23.399486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.399501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.399838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.399852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.400073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.400088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.400432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.400446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.400772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.400789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 
00:29:36.833 [2024-12-06 13:37:23.401107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.401123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.401470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.401487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.401876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.401895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.402236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.402251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 00:29:36.833 [2024-12-06 13:37:23.402463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.402478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.833 qpair failed and we were unable to recover it. 
00:29:36.833 [2024-12-06 13:37:23.402677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.833 [2024-12-06 13:37:23.402692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.403037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.403051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.403397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.403412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.403776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.403792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.404125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.404140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 
00:29:36.834 [2024-12-06 13:37:23.404532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.404547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.404896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.404910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.405263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.405278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.405592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.405606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.405957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.405973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 
00:29:36.834 [2024-12-06 13:37:23.406315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.406329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.406542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.406557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.406847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.406859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.407186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.407199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.407524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.407543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 
00:29:36.834 [2024-12-06 13:37:23.407848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.407861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.408179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.408193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.408550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.408563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.408748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.408762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.409091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.409105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 
00:29:36.834 [2024-12-06 13:37:23.409408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.409421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.409757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.409772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.410127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.410141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.410528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.410543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 00:29:36.834 [2024-12-06 13:37:23.410894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.834 [2024-12-06 13:37:23.410907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:36.834 qpair failed and we were unable to recover it. 
00:29:36.834 [2024-12-06 13:37:23.411233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.411247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.411568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.411582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.411934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.411948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.412164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.412177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.412382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.412395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.412683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.412700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.413026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.413040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.413359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.413375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.413721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.413736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.413993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.414007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.414188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.834 [2024-12-06 13:37:23.414201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.834 qpair failed and we were unable to recover it.
00:29:36.834 [2024-12-06 13:37:23.414590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.835 [2024-12-06 13:37:23.414605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.835 qpair failed and we were unable to recover it.
00:29:36.835 [2024-12-06 13:37:23.414981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.835 [2024-12-06 13:37:23.415000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:36.835 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.415317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.415332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.415632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.415647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.416008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.416025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.416366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.416381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.416696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.416709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.417026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.417039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.417384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.417398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.120 [2024-12-06 13:37:23.417744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.120 [2024-12-06 13:37:23.417758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.120 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.418076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.418091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.418414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.418427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.418779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.418795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.419150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.419164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.419518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.419532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.419886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.419900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.420245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.420259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.420605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.420618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.420936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.420948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.421299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.421312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.421647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.421664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.422032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.422044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.422353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.422367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.422699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.422712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.423099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.423113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.423336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.423350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.423680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.423695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.424047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.424061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.424366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.424380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.424587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.424601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.424907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.424921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.425141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.425155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.425469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.425483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.425700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.425714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.426044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.426059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.426403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.426417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.426732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.426747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.427141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.427155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.427504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.427518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.427858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.427870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.428191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.428206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.428624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.428639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.428965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.428979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.429321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.429334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.429665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.429680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.430026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.430039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.430350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.121 [2024-12-06 13:37:23.430362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.121 qpair failed and we were unable to recover it.
00:29:37.121 [2024-12-06 13:37:23.430676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.430690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.431042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.431056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.431398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.431412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.431767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.431781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.432122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.432136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.432323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.432336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.432692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.432705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.433053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.433068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.433398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.433413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.433472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.433485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.433772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.433786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.434101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.434113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.434476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.434490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.434729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.434743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.435053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.435065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.435407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.435420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.435740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.435756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.435971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.435984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.436309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.436322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.436567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.436581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.436801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.436814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.437131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.437143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.437501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.437516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.437829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.437842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.438193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.438207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.438529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.438543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.438905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.438918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.439244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.439258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.439619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.439634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.439943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.439956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.440308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.440322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.440671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.440684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.440960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.440974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.441325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.441340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.441672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.441689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.442021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.442036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.442347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.442361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.122 [2024-12-06 13:37:23.442679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.122 [2024-12-06 13:37:23.442694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.122 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.443048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.443063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.443368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.443382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.443699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.443714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.444062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.444078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.444425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.444440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.444752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.444767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.445019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.445034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.445398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.445412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.445671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.445685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.445871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.445885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.446202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.446216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.446538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.446552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.446911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.446925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.447267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.447280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.447597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.447610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.447962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.447975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.448328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.448341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.448688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.448701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.449016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.123 [2024-12-06 13:37:23.449030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.123 qpair failed and we were unable to recover it.
00:29:37.123 [2024-12-06 13:37:23.449217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.449232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.449627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.449642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.449975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.449989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.450302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.450315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.450644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.450659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 
00:29:37.123 [2024-12-06 13:37:23.451003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.451017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.451216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.451229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.451536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.451551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.451761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.451773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.452063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.452077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 
00:29:37.123 [2024-12-06 13:37:23.452406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.452420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.452769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.452783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.453132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.453147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.453497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.453510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.453836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.453850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 
00:29:37.123 [2024-12-06 13:37:23.454170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.454182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.454522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.454536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.454922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.454938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.123 [2024-12-06 13:37:23.455108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.123 [2024-12-06 13:37:23.455121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.123 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.455441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.455463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.455808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.455821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.456167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.456181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.456367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.456382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.456703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.456718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.457048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.457063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.457382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.457397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.457715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.457729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.458041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.458056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.458397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.458411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.458769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.458785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.458966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.458981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.459295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.459310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.459656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.459670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.460000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.460014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.460327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.460340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.460552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.460564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.460906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.460920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.461253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.461267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.461592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.461605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.461911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.461926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.462270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.462284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.462639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.462652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.462855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.462868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.463183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.463196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.463369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.463381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.463721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.463735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.464083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.464098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.464418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.464431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.464790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.464804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.465131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.465144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.465343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.465356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.465739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.465753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.466064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.466076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.466437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.466450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.466651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.466663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 
00:29:37.124 [2024-12-06 13:37:23.467004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.467017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.124 [2024-12-06 13:37:23.467356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.124 [2024-12-06 13:37:23.467371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.124 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.467669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.467685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.468013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.468026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.468351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.468365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 
00:29:37.125 [2024-12-06 13:37:23.468706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.468720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.469064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.469078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.469421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.469435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.469634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.469648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.470024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.470039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 
00:29:37.125 [2024-12-06 13:37:23.470388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.470402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.470750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.470764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.471078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.471093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.471443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.471462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.471801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.471815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 
00:29:37.125 [2024-12-06 13:37:23.472155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.472170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.472376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.472390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.472707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.472722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.473069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.473083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 00:29:37.125 [2024-12-06 13:37:23.473281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.125 [2024-12-06 13:37:23.473299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.125 qpair failed and we were unable to recover it. 
00:29:37.125 [2024-12-06 13:37:23.473640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.125 [2024-12-06 13:37:23.473654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.125 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps changing, from 13:37:23.473640 through 13:37:23.512204 ...]
00:29:37.128 [2024-12-06 13:37:23.512545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.512561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.512901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.512915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.513271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.513285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.513650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.513663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.513869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.513882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 
00:29:37.128 [2024-12-06 13:37:23.514204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.514219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.514521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.514534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.514846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.514860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.515193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.515207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.515525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.515537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 
00:29:37.128 [2024-12-06 13:37:23.515884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.515897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.516249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.516262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.516593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.128 [2024-12-06 13:37:23.516605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.128 qpair failed and we were unable to recover it. 00:29:37.128 [2024-12-06 13:37:23.516964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.516977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.517349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.517365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.517673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.517687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.517882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.517896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.518240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.518253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.518658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.518672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.518984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.518996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.519308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.519323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.519704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.519717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.520033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.520045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.520260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.520273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.520612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.520627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.520941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.520954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.521313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.521326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.521675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.521688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.522034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.522046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.522365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.522378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.522711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.522724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.522949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.522960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.523302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.523316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.523675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.523688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.523950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.523962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.524308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.524321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.524629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.524642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.524839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.524852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.525178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.525192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.525537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.525550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.525894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.525907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.526225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.526240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.526593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.526606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.526949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.526964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.129 [2024-12-06 13:37:23.527298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.527312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 
00:29:37.129 [2024-12-06 13:37:23.527664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.129 [2024-12-06 13:37:23.527677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.129 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.527868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.527881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.528205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.528220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.528562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.528576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.528898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.528910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 
00:29:37.130 [2024-12-06 13:37:23.529252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.529265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.529616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.529631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.529981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.529994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.530312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.530326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.530631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.530648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 
00:29:37.130 [2024-12-06 13:37:23.530962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.530975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.531322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.531336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.531684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.531698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.532053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.532068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.532436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.532450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 
00:29:37.130 [2024-12-06 13:37:23.532640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.532654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.532956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.532970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.533300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.533313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.533661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.533676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.534017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.534031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 
00:29:37.130 [2024-12-06 13:37:23.534344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.534358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.534633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.534646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.534956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.534970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.535313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.535326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.535714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.535728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 
00:29:37.130 [2024-12-06 13:37:23.536031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.536044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.536281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.536294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.536488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.536501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.536840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.536853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 00:29:37.130 [2024-12-06 13:37:23.537198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.130 [2024-12-06 13:37:23.537212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.130 qpair failed and we were unable to recover it. 
00:29:37.130 [2024-12-06 13:37:23.537563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.130 [2024-12-06 13:37:23.537577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.130 qpair failed and we were unable to recover it.
[the three preceding log lines repeat with updated timestamps through 13:37:23.574813]
00:29:37.133 [2024-12-06 13:37:23.575123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.575138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.133 [2024-12-06 13:37:23.575334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.575347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.133 [2024-12-06 13:37:23.575674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.575686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.133 [2024-12-06 13:37:23.576002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.576015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.133 [2024-12-06 13:37:23.576329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.576345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 
00:29:37.133 [2024-12-06 13:37:23.576470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.576483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.133 [2024-12-06 13:37:23.576825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.576838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.133 [2024-12-06 13:37:23.577164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.133 [2024-12-06 13:37:23.577178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.133 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.577574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.577587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.577910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.577922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.578246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.578259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.578557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.578571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.578899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.578912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.579110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.579123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.579468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.579482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.579803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.579817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.580169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.580182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.580528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.580543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.580736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.580751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.581096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.581110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.581431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.581445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.581683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.581697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.581923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.581938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.582238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.582253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.582612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.582628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.582956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.582970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.583327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.583340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.583671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.583687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.584030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.584043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.584357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.584371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.584669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.584683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.585045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.585059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.585366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.585380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.585727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.585742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.586074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.586087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.586410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.586425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.586643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.586656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.587006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.587021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.587369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.587383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.587609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.587624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 
00:29:37.134 [2024-12-06 13:37:23.587944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.134 [2024-12-06 13:37:23.587959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.134 qpair failed and we were unable to recover it. 00:29:37.134 [2024-12-06 13:37:23.588301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.588315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.588684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.588698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.589022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.589037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.589389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.589402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.589756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.589770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.590108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.590122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.590326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.590340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.590719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.590733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.591058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.591071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.591416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.591431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.591752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.591766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.591950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.591963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.592313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.592327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.592685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.592700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.593055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.593068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.593390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.593406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.593759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.593775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.594086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.594100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.594432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.594447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.594790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.594804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.595153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.595169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.595504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.595522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.595720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.595734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.596071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.596088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.596472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.596486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.596837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.596852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.597176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.597192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.597551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.597564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.597919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.597935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.598282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.598297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.598522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.598534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.598857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.598870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.599214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.599230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 00:29:37.135 [2024-12-06 13:37:23.599547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.135 [2024-12-06 13:37:23.599563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.135 qpair failed and we were unable to recover it. 
00:29:37.135 [2024-12-06 13:37:23.599923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.135 [2024-12-06 13:37:23.599939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.135 qpair failed and we were unable to recover it.
[message repeated for every subsequent connect() retry through 13:37:23.638194: each attempt against addr=10.0.0.2, port=4420 on tqpair=0x7f0a38000b90 failed with errno = 111 (connection refused) and the qpair could not be recovered]
00:29:37.138 [2024-12-06 13:37:23.638531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.138 [2024-12-06 13:37:23.638549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-12-06 13:37:23.638903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.138 [2024-12-06 13:37:23.638916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.138 [2024-12-06 13:37:23.639244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.138 [2024-12-06 13:37:23.639259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.138 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.639578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.639592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.639818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.639830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.640010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.640024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.640363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.640377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.640692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.640708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.641048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.641065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.641411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.641424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.641775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.641790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.642107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.642121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.642474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.642489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.642804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.642820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.643150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.643164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.643517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.643531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.643899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.643913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.644235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.644249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.644601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.644615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.644933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.644948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.645296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.645311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.645448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b47e10 is same with the state(6) to be set 00:29:37.139 [2024-12-06 13:37:23.646127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.646253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.646755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.646802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.647219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.647256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.647728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.647796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.648158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.648175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.648524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.648539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.648884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.648896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.649196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.649210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.649565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.649581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.649973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.649987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.650206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.650219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.650565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.650581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.650936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.650949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 00:29:37.139 [2024-12-06 13:37:23.651293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.651306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.139 qpair failed and we were unable to recover it. 
00:29:37.139 [2024-12-06 13:37:23.651645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.139 [2024-12-06 13:37:23.651662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.651841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.651854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.652171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.652187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.652526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.652544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.652887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.652900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.653245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.653261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.653611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.653625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.653944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.653960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.654316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.654330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.654673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.654688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.655035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.655050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.655368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.655382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.655726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.655745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.656090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.656106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.656436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.656450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.656774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.656787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.656967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.656981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.657302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.657316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.657664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.657679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.657874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.657888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.658090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.658102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.658433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.658449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.658795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.658809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.659165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.659179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.659528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.659541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.659746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.659761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.660101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.660116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.660473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.660490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.660829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.660842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.661169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.661185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.661509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.661523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.661868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.661882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.662094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.662107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.662370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.662383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.662725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.662739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 
00:29:37.140 [2024-12-06 13:37:23.663104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.663119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.663321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.663335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.663653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.663667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.140 [2024-12-06 13:37:23.663994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.140 [2024-12-06 13:37:23.664009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.140 qpair failed and we were unable to recover it. 00:29:37.141 [2024-12-06 13:37:23.664330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.141 [2024-12-06 13:37:23.664345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.141 qpair failed and we were unable to recover it. 
00:29:37.141 [2024-12-06 13:37:23.664699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.141 [2024-12-06 13:37:23.664713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.141 qpair failed and we were unable to recover it.
00:29:37.144 [2024-12-06 13:37:23.702899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.702911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.703257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.703273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.703622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.703636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.703831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.703844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.704186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.704200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 
00:29:37.144 [2024-12-06 13:37:23.704544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.704557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.704879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.704891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.705229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.705244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.705477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.705491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.705840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.705854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 
00:29:37.144 [2024-12-06 13:37:23.706180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.706195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.706538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.706551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.706882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.706894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.707240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.707253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.707632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.707645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 
00:29:37.144 [2024-12-06 13:37:23.707830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.707843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.708217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.708231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.708580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.708598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.708918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.708931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.709259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.709274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 
00:29:37.144 [2024-12-06 13:37:23.709654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.709667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.710011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.710025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.710375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.710389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.710711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.710724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.711066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.711078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 
00:29:37.144 [2024-12-06 13:37:23.711429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.711442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.711759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.711773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.712114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.712128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.712467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.712480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.712805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.712818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 
00:29:37.144 [2024-12-06 13:37:23.713127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.713142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.713492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.144 [2024-12-06 13:37:23.713506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.144 qpair failed and we were unable to recover it. 00:29:37.144 [2024-12-06 13:37:23.713853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.713866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.714218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.714231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.714581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.714594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.714941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.714954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.715308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.715321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.715670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.715685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.716031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.716043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.716379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.716393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.716586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.716601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.716937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.716949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.717291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.717306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.717655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.717668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.717978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.717993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.718325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.718338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.718676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.718691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.719029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.719041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.719401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.719416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.719635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.719649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.719942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.719955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.720284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.720298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.720489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.720502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.720696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.720711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.721054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.721066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.721411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.721425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.721736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.721750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.722082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.722097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.722441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.722463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.722803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.722817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.723132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.723146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.723327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.723340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.723538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.723551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.723884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.723899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.724239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.724251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 
00:29:37.145 [2024-12-06 13:37:23.724571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.724585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.724798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.724811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.725142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.725154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.725488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.145 [2024-12-06 13:37:23.725501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.145 qpair failed and we were unable to recover it. 00:29:37.145 [2024-12-06 13:37:23.725829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.725843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.726197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.726211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.726562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.726576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.726925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.726938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.727279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.727293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.727631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.727645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.727963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.727974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.728335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.728349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.728700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.728714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.729031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.729044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.729389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.729403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.729717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.729730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.730060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.730075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.730279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.730300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.730487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.730502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.730875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.730889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.731239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.731252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.731599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.731613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.731818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.731830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.732157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.732170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.732512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.732526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.732867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.732881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.733272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.733286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.733630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.733646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.734028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.734041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.734352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.734364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.734730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.734744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.735103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.735119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.735321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.735339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.735680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.735693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.736020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.736034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.736395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.736409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.736763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.736778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.737124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.737140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.737371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.737386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.737736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.737751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 
00:29:37.146 [2024-12-06 13:37:23.738094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.738108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.146 [2024-12-06 13:37:23.738466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.146 [2024-12-06 13:37:23.738480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.146 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.738818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.738831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.739220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.739235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.739543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.739555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.739905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.739918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.740263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.740277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.740622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.740636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.740976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.740991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.741194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.741207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.741551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.741565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.741903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.741917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.742247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.742262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.742448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.742469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.742859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.742873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.743272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.743285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.743601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.743614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.743949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.743964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.744303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.744318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.744672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.744686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.745027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.745042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.745392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.745405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.745737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.745752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.746091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.746105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.746430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.746443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.746654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.746667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.746985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.746997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.747342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.747355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.747750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.747764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.748082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.748094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.748443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.748462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.748801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.748816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.749147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.749162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.749492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.749507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.749806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.749819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.147 [2024-12-06 13:37:23.750161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.750173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.750525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.750539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.750920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.750937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.751248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.751261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 00:29:37.147 [2024-12-06 13:37:23.751652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.147 [2024-12-06 13:37:23.751666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.147 qpair failed and we were unable to recover it. 
00:29:37.148 [2024-12-06 13:37:23.751966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.148 [2024-12-06 13:37:23.751979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.148 qpair failed and we were unable to recover it. 00:29:37.148 [2024-12-06 13:37:23.752320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.148 [2024-12-06 13:37:23.752333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.148 qpair failed and we were unable to recover it. 00:29:37.148 [2024-12-06 13:37:23.752700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.148 [2024-12-06 13:37:23.752715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.148 qpair failed and we were unable to recover it. 00:29:37.148 [2024-12-06 13:37:23.753062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.148 [2024-12-06 13:37:23.753077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.148 qpair failed and we were unable to recover it. 00:29:37.148 [2024-12-06 13:37:23.753420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.148 [2024-12-06 13:37:23.753434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.148 qpair failed and we were unable to recover it. 
00:29:37.424 [2024-12-06 13:37:23.753760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.753776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.754080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.754098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.754405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.754418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.754745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.754758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.755128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.755143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 
00:29:37.424 [2024-12-06 13:37:23.755529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.755542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.755890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.755902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.756261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.756275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.424 [2024-12-06 13:37:23.756510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.424 [2024-12-06 13:37:23.756523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.424 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.756858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.756871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 
00:29:37.425 [2024-12-06 13:37:23.757220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.757236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.757576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.757591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.757913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.757926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.758285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.758300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.758630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.758643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 
00:29:37.425 [2024-12-06 13:37:23.758989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.759003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.759347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.759361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.759710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.759723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.760051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.760064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 00:29:37.425 [2024-12-06 13:37:23.760442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.425 [2024-12-06 13:37:23.760461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.425 qpair failed and we were unable to recover it. 
00:29:37.425 [2024-12-06 13:37:23.760657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.425 [2024-12-06 13:37:23.760670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.425 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats roughly 110 more times between 13:37:23.760850 and 13:37:23.798705, always with errno = 111 for tqpair=0x7f0a38000b90, addr=10.0.0.2, port=4420 ...]
00:29:37.428 [2024-12-06 13:37:23.799044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.799059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.799409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.799422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.799745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.799758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.800092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.800106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.800450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.800470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 
00:29:37.428 [2024-12-06 13:37:23.800860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.800875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.801214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.801227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.801573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.801588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.801904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.801917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.802295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.802311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 
00:29:37.428 [2024-12-06 13:37:23.802499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.802513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.802846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.802859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.803187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.803201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.803551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.803564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.803782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.803793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 
00:29:37.428 [2024-12-06 13:37:23.804135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.804148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.804344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.804357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.804674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.804687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.805021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.805038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.805381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.805395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 
00:29:37.428 [2024-12-06 13:37:23.805715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.805729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.805911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.805925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.806209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.806221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.806411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.806425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 00:29:37.428 [2024-12-06 13:37:23.806635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.428 [2024-12-06 13:37:23.806649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.428 qpair failed and we were unable to recover it. 
00:29:37.428 [2024-12-06 13:37:23.806983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.806996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.807348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.807363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.807684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.807698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.808043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.808055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.808392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.808407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 
00:29:37.429 [2024-12-06 13:37:23.808718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.808732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.809066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.809081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.809412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.809426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.809763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.809777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.809980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.809995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 
00:29:37.429 [2024-12-06 13:37:23.810312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.810326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.810572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.810586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.810787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.810800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.811145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.811159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.811503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.811521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 
00:29:37.429 [2024-12-06 13:37:23.811881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.811894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.812114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.812126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.812479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.812492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.812869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.812882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.813188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.813201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 
00:29:37.429 [2024-12-06 13:37:23.813558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.813572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.813797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.813809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.814165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.814177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.814570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.814584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.814786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.814798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 
00:29:37.429 [2024-12-06 13:37:23.815187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.815201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.815530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.815543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.815875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.815888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.816255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.816270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.816599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.816613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 
00:29:37.429 [2024-12-06 13:37:23.816952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.429 [2024-12-06 13:37:23.816967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.429 qpair failed and we were unable to recover it. 00:29:37.429 [2024-12-06 13:37:23.817300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.817312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.817632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.817645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.817991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.818003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.818356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.818371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 
00:29:37.430 [2024-12-06 13:37:23.818708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.818722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.819067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.819081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.819431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.819444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.819798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.819813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.820164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.820179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 
00:29:37.430 [2024-12-06 13:37:23.820530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.820543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.820731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.820744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.821084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.821098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.821288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.821303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.821481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.821497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 
00:29:37.430 [2024-12-06 13:37:23.821877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.821890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.822247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.822261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.822607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.822620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.822943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.822959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 00:29:37.430 [2024-12-06 13:37:23.823311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.823323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it. 
00:29:37.430 [2024-12-06 13:37:23.823678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.430 [2024-12-06 13:37:23.823693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.430 qpair failed and we were unable to recover it.
00:29:37.430 [... the three log lines above repeat for every reconnect attempt from 13:37:23.824012 through 13:37:23.861847 (timestamps advance, all other fields identical): each connect() to 10.0.0.2 port 4420 on tqpair=0x7f0a38000b90 fails with errno = 111 and the qpair cannot be recovered ...]
00:29:37.433 [2024-12-06 13:37:23.862200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.862215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.862561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.862575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.862925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.862940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.863276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.863290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.863640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.863656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 
00:29:37.433 [2024-12-06 13:37:23.864005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.864018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.864356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.864370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.864737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.864752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.865101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.865116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.865444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.865471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 
00:29:37.433 [2024-12-06 13:37:23.865766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.865780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.866008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.866020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.866360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.433 [2024-12-06 13:37:23.866375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.433 qpair failed and we were unable to recover it. 00:29:37.433 [2024-12-06 13:37:23.866691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.866704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.867022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.867036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.867365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.867378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.867702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.867716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.868060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.868076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.868421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.868434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.868758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.868772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.869116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.869129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.869512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.869527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.869873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.869888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.870229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.870245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.870590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.870605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.870922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.870936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.871279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.871293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.871626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.871644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.871966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.871980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.872316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.872330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.872536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.872550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.872897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.872910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.873250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.873266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.873604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.873618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.873937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.873949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.874300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.874314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.874649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.874665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.875016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.875031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.875365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.875379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.875714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.875730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.875951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.875963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.876160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.876172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.876509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.876522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.876845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.876859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.877173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.877188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.877535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.877549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.877890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.877904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.878226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.878239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.878559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.878572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.434 [2024-12-06 13:37:23.878918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.878937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 
00:29:37.434 [2024-12-06 13:37:23.879276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.434 [2024-12-06 13:37:23.879292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.434 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.879637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.879650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.879973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.879987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.880192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.880207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.880504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.880517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 
00:29:37.435 [2024-12-06 13:37:23.880755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.880770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.881105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.881120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.881475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.881493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.881838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.881852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.882195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.882212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 
00:29:37.435 [2024-12-06 13:37:23.882535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.882550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.882906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.882921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.883269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.883284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.883624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.883638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.883993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.884010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 
00:29:37.435 [2024-12-06 13:37:23.884336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.884349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.884527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.884539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.884881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.884894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.885247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.885262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.885608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.885622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 
00:29:37.435 [2024-12-06 13:37:23.885966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.885981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.886321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.886339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.886667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.886681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.887037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.887051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 00:29:37.435 [2024-12-06 13:37:23.887392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.435 [2024-12-06 13:37:23.887407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.435 qpair failed and we were unable to recover it. 
00:29:37.435 [2024-12-06 13:37:23.887582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.887597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.887904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.887920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.888252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.888270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.888611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.888625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.888955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.888969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.889157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.889171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.889471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.889486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.889800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.889816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.890162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.890177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.890502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.890518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.890844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.890860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.891205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.891220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.891563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.891578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.435 qpair failed and we were unable to recover it.
00:29:37.435 [2024-12-06 13:37:23.891902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.435 [2024-12-06 13:37:23.891916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.892244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.892262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.892596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.892610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.892933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.892949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.893130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.893144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.893479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.893492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.893680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.893697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.893916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.893929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.894284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.894298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.894653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.894668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.894844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.894859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.895203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.895218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.895524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.895538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.895957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.895971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.896304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.896320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.896656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.896670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.897014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.897030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.897405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.897420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.897777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.897795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.898120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.898133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.898451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.898474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.898781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.898794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.899141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.899158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.899504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.899518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.899883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.899897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.900224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.900238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.900581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.900594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.900976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.900990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.901315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.901329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.901687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.901700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.902048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.902063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.902410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.902423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.902742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.902756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.903110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.903122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.903464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.903479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.903814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.903828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.904176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.904190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.904524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.904538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.436 qpair failed and we were unable to recover it.
00:29:37.436 [2024-12-06 13:37:23.904908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.436 [2024-12-06 13:37:23.904922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.905269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.905281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.905684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.905698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.905900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.905917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.906232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.906244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.906597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.906610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.906930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.906943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.907264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.907278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.907631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.907644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.908001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.908353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.908367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.908546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.908560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.908912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.908926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.909263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.909278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.909605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.909618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.909952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.909966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.910150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.910164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.910511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.910524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.910870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.910882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.911231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.911246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.911593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.911606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.911930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.911942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.912265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.912278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.912608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.912624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.912969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.912983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.913305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.913319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.913630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.913643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.913854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.913870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.914205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.914218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.914419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.914433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.914827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.914841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.915112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.915128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.915449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.915471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.915816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.915829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.916043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.916055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.916380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.916394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.916619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.916634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.916955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.916969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.437 qpair failed and we were unable to recover it.
00:29:37.437 [2024-12-06 13:37:23.917309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.437 [2024-12-06 13:37:23.917323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.917524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.917538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.917887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.917902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.918069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.918081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.918422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.918434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.918782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.918799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.918982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.918994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.919222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.919234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.919566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.919579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.919913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.919926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.920304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.920316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.920663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.920678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.920988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.921001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.921345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.921359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.921662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.921676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.922010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.922024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.922371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.922385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.922572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.922585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.922910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.922923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.923298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.923312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.923516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.923529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.923735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.923748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.923929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.923942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.924295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.924308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.924631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.924645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.924992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.438 [2024-12-06 13:37:23.925005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.438 qpair failed and we were unable to recover it.
00:29:37.438 [2024-12-06 13:37:23.925363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.925376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.438 [2024-12-06 13:37:23.925555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.925566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.438 [2024-12-06 13:37:23.925893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.925906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.438 [2024-12-06 13:37:23.926252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.926266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.438 [2024-12-06 13:37:23.926593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.926606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 
00:29:37.438 [2024-12-06 13:37:23.926920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.926933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.438 [2024-12-06 13:37:23.927259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.927273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.438 [2024-12-06 13:37:23.927607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.438 [2024-12-06 13:37:23.927620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.438 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.927946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.927960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.928285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.928298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.928630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.928645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.928987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.929001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.929196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.929209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.929593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.929607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.929930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.929942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.930255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.930268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.930612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.930629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.930986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.930999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.931312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.931327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.931676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.931689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.932033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.932047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.932384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.932397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.932726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.932741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.933068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.933081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.933399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.933411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.933734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.933747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.934089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.934102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.934452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.934483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.934813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.934825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.935036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.935049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.935384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.935398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.935730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.935744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.936084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.936099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.936449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.936471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.936779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.936792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.937119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.937134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.937478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.937491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.937688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.937701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.938007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.938019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.938360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.938374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.938698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.938712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.939075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.939090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.939268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.939284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.939675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.939690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.940025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.940039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 
00:29:37.439 [2024-12-06 13:37:23.940226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.439 [2024-12-06 13:37:23.940241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.439 qpair failed and we were unable to recover it. 00:29:37.439 [2024-12-06 13:37:23.940592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.940608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.940949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.940963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.941176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.941190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.941513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.941526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.440 [2024-12-06 13:37:23.941893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.941908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.942259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.942273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.942615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.942629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.942983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.942996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.943312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.943324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.440 [2024-12-06 13:37:23.943507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.943519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.943872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.943885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.944228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.944244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.944443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.944463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.944813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.944826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.440 [2024-12-06 13:37:23.945218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.945231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.945576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.945589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.945917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.945930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.946268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.946283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.946628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.946643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.440 [2024-12-06 13:37:23.946964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.946978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.947324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.947337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.947688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.947701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.947903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.947916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.948121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.948133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.440 [2024-12-06 13:37:23.948469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.948484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.948830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.948845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.949185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.949198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.949530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.949545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.949895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.949908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.440 [2024-12-06 13:37:23.950119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.950132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.950465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.950480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.950905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.950919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.951228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.951243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 00:29:37.440 [2024-12-06 13:37:23.951577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.440 [2024-12-06 13:37:23.951591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.440 qpair failed and we were unable to recover it. 
00:29:37.443 [2024-12-06 13:37:23.989023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.443 [2024-12-06 13:37:23.989036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.443 qpair failed and we were unable to recover it. 00:29:37.443 [2024-12-06 13:37:23.989221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.443 [2024-12-06 13:37:23.989234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.443 qpair failed and we were unable to recover it. 00:29:37.443 [2024-12-06 13:37:23.989544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.443 [2024-12-06 13:37:23.989558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.443 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.989911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.989925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.990257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.990272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:23.990599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.990613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.990970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.990985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.991327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.991342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.991556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.991569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.991920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.991933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:23.992278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.992293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.992534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.992547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.992868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.992880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.993263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.993275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.993586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.993599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:23.993943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.993957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.994279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.994297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.994622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.994636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.995020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.995032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.995396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.995411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:23.995726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.995740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.995966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.995979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.996150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.996162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.996508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.996522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.996854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.996868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:23.997206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.997218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.997571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.997586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.997788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.997802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.998156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.998170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.998485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.998500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:23.998831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.998844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.999167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.999180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.999524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.999539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:23.999889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:23.999902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:24.000230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.000244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:24.000609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.000624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:24.001009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.001023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:24.001348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.001362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:24.001785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.001798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.444 [2024-12-06 13:37:24.001997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.002011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 
00:29:37.444 [2024-12-06 13:37:24.002214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.444 [2024-12-06 13:37:24.002227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.444 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.002430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.002444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.002798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.002812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.003143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.003157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.003500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.003516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.003871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.003883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.004219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.004234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.004564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.004578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.004777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.004790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.005113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.005128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.005485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.005500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.005838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.005852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.006199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.006212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.006557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.006570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.006894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.006907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.007250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.007264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.007599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.007617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.007828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.007841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.008177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.008191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.008404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.008416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.008627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.008639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.008816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.008829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.009170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.009183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.009550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.009565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.009902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.009916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.010265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.010279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.010655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.010668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.010993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.011007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.011351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.011366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.011710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.011725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.012065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.012079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.012437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.012451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.012768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.012781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.013099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.013114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 00:29:37.445 [2024-12-06 13:37:24.013306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.013320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
00:29:37.445 [2024-12-06 13:37:24.013566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.445 [2024-12-06 13:37:24.013580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.445 qpair failed and we were unable to recover it. 
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated for tqpair=0x7f0a38000b90 (addr=10.0.0.2, port=4420) from 13:37:24.013 through 13:37:24.051 ...]
00:29:37.449 [2024-12-06 13:37:24.052067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.052080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.052390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.052405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.052717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.052731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.053064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.053079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.053433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.053447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.053790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.053803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.054148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.054164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.054506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.054520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.054909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.054922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.055268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.055281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.055620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.055633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.055953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.055966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.056307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.056323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.056649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.056662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.057003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.057018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.057349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.057362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.057683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.057696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.058013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.058027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.058362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.058376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.058739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.058753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.059102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.059116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.059297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.059311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.059669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.059683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.060005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.060020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.060360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.060373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.060726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.060740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.060916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.060931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.061161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.061176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.061484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.061499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.061850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.061863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.062212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.062226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.062591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.062605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.062997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.063010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.063334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.063348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.063678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.063691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 
00:29:37.449 [2024-12-06 13:37:24.064034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.064046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.449 qpair failed and we were unable to recover it. 00:29:37.449 [2024-12-06 13:37:24.064392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.449 [2024-12-06 13:37:24.064406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.450 [2024-12-06 13:37:24.064764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.064778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.450 [2024-12-06 13:37:24.065124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.065139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.450 [2024-12-06 13:37:24.065486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.065500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 
00:29:37.450 [2024-12-06 13:37:24.065830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.065843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.450 [2024-12-06 13:37:24.066167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.066180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.450 [2024-12-06 13:37:24.066516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.066529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.450 [2024-12-06 13:37:24.066710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.450 [2024-12-06 13:37:24.066724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.450 qpair failed and we were unable to recover it. 00:29:37.726 [2024-12-06 13:37:24.067023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.726 [2024-12-06 13:37:24.067039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.726 qpair failed and we were unable to recover it. 
00:29:37.726 [2024-12-06 13:37:24.067382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.726 [2024-12-06 13:37:24.067397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.726 qpair failed and we were unable to recover it. 00:29:37.726 [2024-12-06 13:37:24.067699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.726 [2024-12-06 13:37:24.067712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.726 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.068028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.068043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.068393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.068407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.068719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.068732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.069077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.069090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.069434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.069449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.069802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.069815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.070202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.070219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.070560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.070573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.070919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.070933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.071273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.071286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.071618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.071633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.071955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.071969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.072323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.072339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.072716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.072730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.073079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.073094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.073431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.073443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.073793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.073808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.074161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.074174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.074530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.074546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.074881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.074898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.075087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.075100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.075470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.075485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.075837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.075851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.076197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.076212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.076552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.076566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.076905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.076917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.077266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.077280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.077635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.077650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.077991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.078005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.078329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.078344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.078548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.078561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.078848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.078860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.079182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.079196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 
00:29:37.727 [2024-12-06 13:37:24.079536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.079551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.079773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.079786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.080107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.080121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.080303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.727 [2024-12-06 13:37:24.080316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.727 qpair failed and we were unable to recover it. 00:29:37.727 [2024-12-06 13:37:24.080628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.728 [2024-12-06 13:37:24.080641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.728 qpair failed and we were unable to recover it. 
00:29:37.728 [2024-12-06 13:37:24.080831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.080844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.081188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.081204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.081528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.081541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.081872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.081884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.082218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.082232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.082579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.082593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.082822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.082834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.083163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.083177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.083524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.083539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.083882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.083895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.084233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.084247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.084575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.084589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.084952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.084966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.085313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.085326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.085683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.085698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.086064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.086078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.086426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.086440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.086768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.086781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.087125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.087139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.087488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.087503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.087851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.087865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.088192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.088212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.088560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.088573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.088935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.088947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.089296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.089311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.089674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.089690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.090033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.090046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.090404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.090418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.090743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.090758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.091105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.091117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.091476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.091492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.091823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.091837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.092163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.092179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.092533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.092548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.092881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.092895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.093214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.093228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.093576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.728 [2024-12-06 13:37:24.093590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.728 qpair failed and we were unable to recover it.
00:29:37.728 [2024-12-06 13:37:24.093780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.093793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.094136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.094149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.094497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.094512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.094844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.094858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.095199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.095212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.095573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.095589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.095918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.095932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.096269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.096283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.096640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.096654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.096974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.096986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.097327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.097343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.097684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.097699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.098087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.098101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.098299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.098314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.098634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.098649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.098993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.099008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.099189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.099205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.099585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.099600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.099788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.099802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.100144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.100159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.100509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.100522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.100866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.100879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.101230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.101246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.101602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.101618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.101957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.101972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.102317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.102331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.102678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.102692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.103038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.103051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.103376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.103388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.103704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.103720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.104063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.104078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.104477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.104491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.104848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.104863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.105192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.105206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.105551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.105565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.105914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.105928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.106268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.106283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.729 [2024-12-06 13:37:24.106625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.729 [2024-12-06 13:37:24.106639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.729 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.106968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.106982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.107325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.107340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.107692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.107706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.108057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.108070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.108409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.108426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.108757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.108773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.109121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.109137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.109473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.109487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.109781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.109793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.110098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.110111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.110396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.110409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.110753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.110768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.112113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.112157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.112511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.112534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.112872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.112887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.113096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.113109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.113407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.113420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.113744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.113758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.114117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.114129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.114483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.730 [2024-12-06 13:37:24.114498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.730 qpair failed and we were unable to recover it.
00:29:37.730 [2024-12-06 13:37:24.114842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.114856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.115213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.115229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.115614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.115630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.115968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.115981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.116332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.116346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 
00:29:37.730 [2024-12-06 13:37:24.116698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.116712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.117032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.117049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.117263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.117277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.117613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.117628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.117906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.117920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 
00:29:37.730 [2024-12-06 13:37:24.118138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.118154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.118485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.118498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.118846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.118861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.119213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.119229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.119589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.119605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 
00:29:37.730 [2024-12-06 13:37:24.119794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.119808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.120162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.120175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.730 [2024-12-06 13:37:24.120526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.730 [2024-12-06 13:37:24.120542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.730 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.120870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.120884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.121207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.121223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 
00:29:37.731 [2024-12-06 13:37:24.121579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.121593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.121915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.121930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.122286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.122299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.123387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.123425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.123803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.123820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 
00:29:37.731 [2024-12-06 13:37:24.124164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.124179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.124518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.124534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.124888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.124902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.125244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.125258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.125436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.125448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 
00:29:37.731 [2024-12-06 13:37:24.125784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.125799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.126142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.126155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.126507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.126524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.126868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.126885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.127175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.127188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 
00:29:37.731 [2024-12-06 13:37:24.127539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.127552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.127877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.127890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.128235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.128252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.128592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.128605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.128970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.128985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 
00:29:37.731 [2024-12-06 13:37:24.129330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.129344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.129697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.129712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.130038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.130051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.130408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.130424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.130735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.130750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 
00:29:37.731 [2024-12-06 13:37:24.131077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.131090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.131294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.131308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.131643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.131659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.731 [2024-12-06 13:37:24.131986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.731 [2024-12-06 13:37:24.131999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.731 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.132349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.132365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.132716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.132730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.132923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.132936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.133245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.133260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.133570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.133585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.133908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.133922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.134268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.134282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.134630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.134644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.134928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.134942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.135307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.135322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.135656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.135672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.136020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.136033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.136389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.136404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.136738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.136753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.137096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.137110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.137338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.137351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.137708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.137723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.138058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.138072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.138434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.138449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.138797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.138811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.139161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.139177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.139521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.139534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.139882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.139898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.140250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.140262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.140586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.140602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.140941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.140954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.141294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.141515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.141529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.141712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.141724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.142057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.142070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.142394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.142408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.142732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.142748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.143079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.143093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.143465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.143481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.143806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.143819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 00:29:37.732 [2024-12-06 13:37:24.144156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.144173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it. 
00:29:37.732 [2024-12-06 13:37:24.144521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.732 [2024-12-06 13:37:24.144536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.732 qpair failed and we were unable to recover it.
00:29:37.733 [repeated: the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" sequence recurs continuously between 13:37:24.144 and 13:37:24.182, first for tqpair=0x7f0a38000b90 and then for tqpair=0x1b520c0, all targeting addr=10.0.0.2, port=4420]
00:29:37.735 [2024-12-06 13:37:24.182911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.735 [2024-12-06 13:37:24.182924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.735 qpair failed and we were unable to recover it. 00:29:37.735 [2024-12-06 13:37:24.183147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.735 [2024-12-06 13:37:24.183159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.735 qpair failed and we were unable to recover it. 00:29:37.735 [2024-12-06 13:37:24.183350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.183362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.183673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.183685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.183985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.183996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.184276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.184288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.184609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.184620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.184917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.184928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.185211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.185224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.185570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.185582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.185925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.185938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.186173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.186184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.186535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.186546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.186868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.186879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.187182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.187194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.187515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.187526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.187722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.187732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.188054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.188064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.188399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.188411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.188724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.188736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.189033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.189044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.189358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.189372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.189667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.189678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.190027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.190039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.190364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.190376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.190656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.190669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.191004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.191015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.191331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.191344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.191671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.191683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.192003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.192014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.192329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.192341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.192685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.192697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.193009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.193022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.193373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.193391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.736 [2024-12-06 13:37:24.193774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.193786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 
00:29:37.736 [2024-12-06 13:37:24.194125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.736 [2024-12-06 13:37:24.194138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.736 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.194451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.194475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.194803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.194815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.195127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.195138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.195461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.195471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.195807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.195819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.196034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.196045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.196374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.196387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.196734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.196745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.197060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.197069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.197256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.197267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.197584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.197594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.197967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.197978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.198296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.198308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.198620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.198631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.198965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.198976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.199294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.199305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.199630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.199641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.199988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.200000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.200165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.200178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.200501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.200512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.200814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.200824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.201148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.201158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.201516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.201823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.201833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.202153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.202165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.202381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.202391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.202667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.202677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.203026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.203038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.203359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.203370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.203596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.203606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.203902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.203912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.204192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.204203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.204431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.204442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.204691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.204702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 
00:29:37.737 [2024-12-06 13:37:24.204989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.205001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.205339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.205352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.205695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.205708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.205890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.737 [2024-12-06 13:37:24.205902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.737 qpair failed and we were unable to recover it. 00:29:37.737 [2024-12-06 13:37:24.206207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.206218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.206518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.206529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.206842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.206853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.207137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.207149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.207483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.207496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.207798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.207810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.208124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.208135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.208463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.208475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.208790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.208801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.208972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.208985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.209314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.209328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.209610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.209622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.209946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.209958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.210287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.210299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.210617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.210628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.210987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.210997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.211314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.211324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.211521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.211531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.211833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.211845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.212166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.212176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.212493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.212503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.212846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.212858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.213203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.213215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.213535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.213546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.213827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.213837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.214156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.214168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.214486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.214496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.214819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.214830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.215042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.215051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.215268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.215282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.215610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.215621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.215934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.215944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.216246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.216256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.216583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.216594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.216919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.216930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.217210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.217220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 
00:29:37.738 [2024-12-06 13:37:24.217498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.217509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.217845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.217855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.738 [2024-12-06 13:37:24.218174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.738 [2024-12-06 13:37:24.218184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.738 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.218546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.218560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.218859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.218870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.219152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.219162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.219469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.219481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.219802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.219813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.220099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.220109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.220403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.220413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.220768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.220780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.221103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.221115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.221431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.221441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.221774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.221785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.222149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.222159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.222465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.222475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.222782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.222793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.223111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.223123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.223415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.223425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.223764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.223777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.224105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.224117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.224400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.224410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.224657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.224667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.224952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.224963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.225241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.225251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.225572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.225583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.225788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.225800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.226021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.226032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.226372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.226383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.226710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.226723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.226936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.226946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.227249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.227259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.227408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.227421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.227779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.227790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.228121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.228134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.228451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.228468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.228776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.228786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.229089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.229099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.229416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.229425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.229707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.229717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 
00:29:37.739 [2024-12-06 13:37:24.229916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.229926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.739 qpair failed and we were unable to recover it. 00:29:37.739 [2024-12-06 13:37:24.230245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.739 [2024-12-06 13:37:24.230256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.230428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.230440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.230773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.230784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.231072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.231083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 
00:29:37.740 [2024-12-06 13:37:24.231365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.231375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.231650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.231661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.231973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.231984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.232267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.232276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.232600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.232612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 
00:29:37.740 [2024-12-06 13:37:24.232964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.232976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.233292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.233302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.233472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.233482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.233793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.233805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.234132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.234142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 
00:29:37.740 [2024-12-06 13:37:24.234367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.234376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.234706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.234717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.234905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.234915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.235240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.235250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 00:29:37.740 [2024-12-06 13:37:24.235549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.740 [2024-12-06 13:37:24.235559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:37.740 qpair failed and we were unable to recover it. 
00:29:37.740 [2024-12-06 13:37:24.235762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.740 [2024-12-06 13:37:24.235772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:37.740 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) for tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 repeated from 13:37:24.236092 through 13:37:24.256652, each followed by "qpair failed and we were unable to recover it." ...]
00:29:37.742 [2024-12-06 13:37:24.257219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.742 [2024-12-06 13:37:24.257283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.742 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) for tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 repeated from 13:37:24.257672 through 13:37:24.273493, each followed by "qpair failed and we were unable to recover it." ...]
00:29:37.743 [2024-12-06 13:37:24.273493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.743 [2024-12-06 13:37:24.273508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.743 qpair failed and we were unable to recover it.
00:29:37.743 [2024-12-06 13:37:24.273863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.273877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.274219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.274233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.274576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.274588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.274821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.274833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.275177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.275190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 
00:29:37.743 [2024-12-06 13:37:24.275515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.275528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.275843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.275856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.276190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.276208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.276529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.276542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.276903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.276917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 
00:29:37.743 [2024-12-06 13:37:24.276987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.277000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.277293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.277306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.277632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.277646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.277705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.277717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 00:29:37.743 [2024-12-06 13:37:24.277999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.743 [2024-12-06 13:37:24.278011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.743 qpair failed and we were unable to recover it. 
00:29:37.743 [2024-12-06 13:37:24.278357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.278373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.278723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.278736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.278942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.278954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.279294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.279307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.279635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.279651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.279892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.279905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.280253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.280268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.280594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.280608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.280954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.280968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.281186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.281200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.281391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.281406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.281622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.281635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.281944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.281956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.282133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.282146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.282483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.282499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.282814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.282828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.283129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.283141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.283486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.283499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.283732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.283744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.284076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.284089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.284436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.284451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.284800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.284812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.285032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.285044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.285234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.285246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.285449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.285467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.285653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.285664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.285969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.285984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.286180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.286193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.286533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.286546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.286893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.286906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.287122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.287134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.287479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.287492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.287693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.287709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.288040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.288053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.288399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.288413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 
00:29:37.744 [2024-12-06 13:37:24.288736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.288749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.289077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.289091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.289418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.744 [2024-12-06 13:37:24.289431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.744 qpair failed and we were unable to recover it. 00:29:37.744 [2024-12-06 13:37:24.289747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.289762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.290110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.290125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 
00:29:37.745 [2024-12-06 13:37:24.290471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.290486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.290894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.290907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.291124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.291136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.291471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.291485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.291814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.291829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 
00:29:37.745 [2024-12-06 13:37:24.292158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.292171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.292518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.292533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.292877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.292890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.293238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.293253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.293588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.293602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 
00:29:37.745 [2024-12-06 13:37:24.293955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.293969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.294308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.294320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.294551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.294564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.294903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.294916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.295225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.295238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 
00:29:37.745 [2024-12-06 13:37:24.295569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.295582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.295931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.295943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.296285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.296298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.296632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.296646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.297000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.297012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 
00:29:37.745 [2024-12-06 13:37:24.297196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.297211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.297549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.297563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.297893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.297907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.298230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.298243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 00:29:37.745 [2024-12-06 13:37:24.298571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.745 [2024-12-06 13:37:24.298586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.745 qpair failed and we were unable to recover it. 
00:29:37.748 [... 2024-12-06 13:37:24.298809 through 13:37:24.335924: ~110 further identical "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records elided ...]
00:29:37.748 [2024-12-06 13:37:24.336269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.336283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.336468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.336483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.336857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.336870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.337209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.337222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.337573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.337586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 
00:29:37.748 [2024-12-06 13:37:24.337924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.337936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.338282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.338296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.338639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.338652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.339003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.339017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 00:29:37.748 [2024-12-06 13:37:24.339353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.339366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.748 qpair failed and we were unable to recover it. 
00:29:37.748 [2024-12-06 13:37:24.339674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.748 [2024-12-06 13:37:24.339686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.339869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.339886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.340078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.340090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.340412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.340425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.340749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.340763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 
00:29:37.749 [2024-12-06 13:37:24.341096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.341109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.341465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.341479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.341824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.341841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.342180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.342195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.342541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.342556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 
00:29:37.749 [2024-12-06 13:37:24.342915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.342929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.343278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.343291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.343650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.343666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.343990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.344003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.344345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.344359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 
00:29:37.749 [2024-12-06 13:37:24.344717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.344731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.345062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.345076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.345407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.345420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.345776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.345790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.346131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.346145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 
00:29:37.749 [2024-12-06 13:37:24.346363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.346379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.346706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.346720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.347054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.347068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.347385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.347398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.347702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.347716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 
00:29:37.749 [2024-12-06 13:37:24.347963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.347976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.348312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.348326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.348678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.348692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.349034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.349049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.349381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.349396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 
00:29:37.749 [2024-12-06 13:37:24.349739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.349752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.350056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.350068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.350414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.350428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.749 qpair failed and we were unable to recover it. 00:29:37.749 [2024-12-06 13:37:24.350824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.749 [2024-12-06 13:37:24.350841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.351178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.351192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.351524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.351540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.351890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.351905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.352285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.352299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.352649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.352664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.352977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.352992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.353336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.353351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.353704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.353718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.354061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.354075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.354412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.354427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.354744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.354762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.355106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.355119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.355473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.355488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.355833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.355850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.356196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.356212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.356549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.356564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.356895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.356910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.357264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.357279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.357594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.357608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.357951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.357966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.358306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.358321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.358677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.358692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.359034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.359049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.359371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.359384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.359731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.359747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.360083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.360099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.360463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.360479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.360829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.360843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.361039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.361053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.361399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.361413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 00:29:37.750 [2024-12-06 13:37:24.361766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.750 [2024-12-06 13:37:24.361780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:37.750 qpair failed and we were unable to recover it. 
00:29:37.750 [2024-12-06 13:37:24.362121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.362136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.750 [2024-12-06 13:37:24.362439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.362457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.750 [2024-12-06 13:37:24.362777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.362792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.750 [2024-12-06 13:37:24.362977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.362990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.750 [2024-12-06 13:37:24.363306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.363320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.750 [2024-12-06 13:37:24.363682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.363696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.750 [2024-12-06 13:37:24.364030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.750 [2024-12-06 13:37:24.364046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.750 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.364368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.364382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.364708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.364721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.365061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.365076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.365415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.365428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.365636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.365651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.365835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.365849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.366207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.366221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.366504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.366516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.366851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.366866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.367211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.367225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.367577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.367592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.367940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.367956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.368306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.368321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.368675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.368692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:37.751 [2024-12-06 13:37:24.369071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.751 [2024-12-06 13:37:24.369085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:37.751 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.369411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.369426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.369744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.369759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.370098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.370113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.370283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.370299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.370610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.370624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.370977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.370992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.371316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.371328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.371514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.371527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.371875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.371889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.372077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.372089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.372439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.372458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.372789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.372802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.373155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.373169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.373513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.373528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.373896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.373908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.374248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.374263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.374612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.374625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.374949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.374963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.375312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.375326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.375533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.375548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.375891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.375905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.029 qpair failed and we were unable to recover it.
00:29:38.029 [2024-12-06 13:37:24.376227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.029 [2024-12-06 13:37:24.376244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.376588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.376605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.376960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.376974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.377327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.377342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.377690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.377705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.378023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.378039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.378381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.378399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.378738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.378754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.379101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.379117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.379470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.379485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.379822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.379836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.380176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.380191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.380541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.380555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.380904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.380919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.381263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.381276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.381594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.381608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.381951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.381964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.382309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.382324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.382670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.382683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.383011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.383026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.383366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.383380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.383562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.383577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.383861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.383874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.384106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.384121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.384448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.384464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.384756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.384769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.384990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.385002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.385329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.385342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.385692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.385708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.386046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.386061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.386405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.386420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.386739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.386754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.387099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.387113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.387462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.387476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.387812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.387828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.388209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.388224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.388532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.388547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.388874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.388888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.030 qpair failed and we were unable to recover it.
00:29:38.030 [2024-12-06 13:37:24.389237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.030 [2024-12-06 13:37:24.389251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.389585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.389600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.389940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.389957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.390302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.390314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.390632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.390645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.391011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.391024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.391387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.391401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.391754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.391770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.392090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.392108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.392413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.392427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.392797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.392810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.393150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.393165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.393556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.393570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.393766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.393780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.394100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.394116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.394296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.394309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.394702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.394718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.395054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.395070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.395391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.395405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.395588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.395601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.395945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.395959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.396317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.396330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.396697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.396712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.396905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.396920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.397079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.397093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.397453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.397470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.397821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.397836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.398181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.398194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.398559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.398576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.398898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.398911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.399250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.399265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.399615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.399629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.399988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.400003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.400328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.031 [2024-12-06 13:37:24.400340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.031 qpair failed and we were unable to recover it.
00:29:38.031 [2024-12-06 13:37:24.400532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.031 [2024-12-06 13:37:24.400545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.031 qpair failed and we were unable to recover it. 00:29:38.031 [2024-12-06 13:37:24.400851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.031 [2024-12-06 13:37:24.400867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.031 qpair failed and we were unable to recover it. 00:29:38.031 [2024-12-06 13:37:24.401255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.031 [2024-12-06 13:37:24.401270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.031 qpair failed and we were unable to recover it. 00:29:38.031 [2024-12-06 13:37:24.401613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.031 [2024-12-06 13:37:24.401628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.031 qpair failed and we were unable to recover it. 00:29:38.031 [2024-12-06 13:37:24.401973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.031 [2024-12-06 13:37:24.401987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.402307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.402323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.402652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.402666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.403018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.403033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.403377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.403391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.403607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.403621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.403964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.403979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.404345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.404358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.404676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.404688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.405001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.405015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.405340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.405356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.405675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.405689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.406032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.406046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.406393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.406406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.406730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.406743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.407048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.407062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.407294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.407307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.407535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.407549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.407879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.407891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.408234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.408248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.408599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.408613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.408958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.408972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.409317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.409331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.409676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.409690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.410048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.410062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.410389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.410404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.410733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.410746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.410976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.410989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.411312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.411325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.411673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.411688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.412006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.412020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.412349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.412363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.412669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.412682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.412865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.412877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.413078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.413091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.413430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.413445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 
00:29:38.032 [2024-12-06 13:37:24.413761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.413775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.414103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.032 [2024-12-06 13:37:24.414117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.032 qpair failed and we were unable to recover it. 00:29:38.032 [2024-12-06 13:37:24.414465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.414480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.414824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.414837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.415163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.415178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.415541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.415555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.415889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.415902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.416251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.416263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.416575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.416588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.416917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.416929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.417261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.417273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.417483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.417497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.417889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.417901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.418221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.418237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.418575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.418590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.418936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.418951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.419296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.419309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.419659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.419673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.420000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.420012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.420309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.420322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.420661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.420673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.421011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.421023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.421364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.421378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.421726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.421741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.422061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.422074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.422425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.422437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.422649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.422663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.423012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.423026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.423363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.423376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.423576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.423589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.423801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.423814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.424136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.424149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.424492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.424506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.424852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.424865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 00:29:38.033 [2024-12-06 13:37:24.425081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.033 [2024-12-06 13:37:24.425093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.033 qpair failed and we were unable to recover it. 
00:29:38.033 [2024-12-06 13:37:24.425407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.033 [2024-12-06 13:37:24.425419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.033 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.425751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.425763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.426107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.426120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.426470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.426483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.426839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.426854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.427189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.427203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.427596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.427610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.427926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.427939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.428259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.428272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.428624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.428637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.428998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.429010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.429364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.429376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.429685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.429698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.430025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.430040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.430381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.430394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.430754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.430766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.431113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.431126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.431450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.431468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.431822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.431835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.432166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.432183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.432523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.432537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.432888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.432903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.433281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.433294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.433625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.433639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.433967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.433981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.434306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.434321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.434676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.434692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.435040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.435054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.435381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.435396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.435706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.435719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.436074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.436089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.436429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.436443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.436840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.436855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.437047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.437062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.437353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.437368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.034 [2024-12-06 13:37:24.437674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.034 [2024-12-06 13:37:24.437689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.034 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.438034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.438049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.438376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.438390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.438713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.438727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.439077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.439092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.439332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.439346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.439537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.439552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.439921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.439935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.440258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.440273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.440611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.440624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.440927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.440939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.441265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.441278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.441597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.441610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.441953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.441966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.442290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.442305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.442632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.442646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.442999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.443013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.443358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.443374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.443721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.443737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.444085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.444097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.444442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.444462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.444767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.444780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.445129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.445144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.445196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.445209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.445532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.445548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.445890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.445903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.446253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.446267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.446612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.446627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.446959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.446973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.447327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.447342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.447627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.447641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.447970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.447984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.448333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.448347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.448662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.448677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.449017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.449031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.449386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.449401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.449714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.449729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.450075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.450089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.035 [2024-12-06 13:37:24.450440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.035 [2024-12-06 13:37:24.450457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.035 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.450812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.450825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.451168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.451182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.451532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.451546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.451870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.451882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.452224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.452238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.452618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.452632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.452952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.452964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.453310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.453322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.453665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.453681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.454021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.454036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.454366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.454380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.454701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.454716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.455060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.455075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.455415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.455428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.455753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.455768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.456001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.456014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.456341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.456355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.456663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.456677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.456977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.456989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.457315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.457329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.457648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.457663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.458002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.458017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.458362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.458374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.458697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.458710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.459059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.459071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.459251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.459267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.459598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.459613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.459960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.459975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.460314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.460328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.460512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.460525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.460877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.460890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.461232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.461247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.461551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.461566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.461915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.461929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.462241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.462254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.462605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.462619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.462924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.462938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.036 [2024-12-06 13:37:24.463165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.036 [2024-12-06 13:37:24.463177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.036 qpair failed and we were unable to recover it.
00:29:38.037 [2024-12-06 13:37:24.463355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.037 [2024-12-06 13:37:24.463368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.037 qpair failed and we were unable to recover it.
00:29:38.037 [2024-12-06 13:37:24.463463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.037 [2024-12-06 13:37:24.463477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.037 qpair failed and we were unable to recover it.
00:29:38.037 [2024-12-06 13:37:24.463778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.037 [2024-12-06 13:37:24.463791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.037 qpair failed and we were unable to recover it.
00:29:38.037 [2024-12-06 13:37:24.464159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.464173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.464529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.464542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.464772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.464785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.464962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.464974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.465167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.465181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.465517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.465531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.465876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.465891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.466249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.466262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.466579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.466591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.466937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.466950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.467144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.467157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.467463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.467478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.467811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.467825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.468168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.468182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.468531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.468547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.468888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.468902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.469217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.469232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.469568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.469582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.469927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.469941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.470295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.470308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.470633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.470646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.470981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.470995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.471345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.471359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.471537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.471551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.471895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.471912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.472128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.472142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.472473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.472487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.472803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.472816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.473028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.473041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.473392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.473405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.473662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.473675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.473999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.474012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.474339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.474354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.474734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.474749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 00:29:38.037 [2024-12-06 13:37:24.475089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.037 [2024-12-06 13:37:24.475103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.037 qpair failed and we were unable to recover it. 
00:29:38.037 [2024-12-06 13:37:24.475447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.475466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.475791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.475806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.476024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.476038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.476382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.476398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.476707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.476722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 
00:29:38.038 [2024-12-06 13:37:24.477076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.477091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.477434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.477448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.477780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.477795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.478113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.478128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.478473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.478488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 
00:29:38.038 [2024-12-06 13:37:24.478665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.478678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.478980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.478993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.479332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.479345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.479693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.479709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.479912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.479926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 
00:29:38.038 [2024-12-06 13:37:24.480304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.480318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.480498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.480512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.480841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.480855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.481196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.481209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.481551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.481565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 
00:29:38.038 [2024-12-06 13:37:24.481910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.481924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.482271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.482284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.482609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.482621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.482946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.482960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.483304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.483318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 
00:29:38.038 [2024-12-06 13:37:24.483679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.483693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.484050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.484063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.484396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.484410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.484605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.484616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.484835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.484849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 
00:29:38.038 [2024-12-06 13:37:24.485166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.485180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.485529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.485542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.485862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.038 [2024-12-06 13:37:24.485876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.038 qpair failed and we were unable to recover it. 00:29:38.038 [2024-12-06 13:37:24.486239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.486252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 00:29:38.039 [2024-12-06 13:37:24.486595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.486609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 
00:29:38.039 [2024-12-06 13:37:24.486806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.486818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 00:29:38.039 [2024-12-06 13:37:24.487150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.487164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 00:29:38.039 [2024-12-06 13:37:24.487505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.487518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 00:29:38.039 [2024-12-06 13:37:24.487890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.487903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 00:29:38.039 [2024-12-06 13:37:24.488112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.488123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 
00:29:38.039 [2024-12-06 13:37:24.488472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.039 [2024-12-06 13:37:24.488486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.039 qpair failed and we were unable to recover it. 
[... identical connect()/qpair failure messages (errno = 111, tqpair=0x7f0a38000b90, addr=10.0.0.2, port=4420) repeated through 2024-12-06 13:37:24.526374; duplicate log lines omitted ...]
00:29:38.042 [2024-12-06 13:37:24.526735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.526749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.527053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.527066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.527268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.527281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.527614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.527629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.527996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.528009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 
00:29:38.042 [2024-12-06 13:37:24.528349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.528364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.528567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.528582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.528782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.528796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.529137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.529149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.529475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.529488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 
00:29:38.042 [2024-12-06 13:37:24.529811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.529823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.530172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.530185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.530530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.530544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.530727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.530742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.531083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.531098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 
00:29:38.042 [2024-12-06 13:37:24.531321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.531337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.531653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.531668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.531848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.531861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.532215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.532229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.532579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.532592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 
00:29:38.042 [2024-12-06 13:37:24.532959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.532972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.533288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.533301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.533644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.533658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.534003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.534015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.534370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.534382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 
00:29:38.042 [2024-12-06 13:37:24.534567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.534582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.534929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.534943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.535153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.535166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.535477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.535491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 00:29:38.042 [2024-12-06 13:37:24.535840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.042 [2024-12-06 13:37:24.535854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.042 qpair failed and we were unable to recover it. 
00:29:38.042 [2024-12-06 13:37:24.536196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.536209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.536440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.536451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.536857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.536871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.537192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.537208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.537567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.537580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 
00:29:38.043 [2024-12-06 13:37:24.537925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.537938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.538282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.538295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.538605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.538618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.538962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.538975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.539315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.539328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 
00:29:38.043 [2024-12-06 13:37:24.539677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.539691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.539890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.539902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.540098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.540112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.540411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.540423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.540745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.540758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 
00:29:38.043 [2024-12-06 13:37:24.541098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.541113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.541468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.541483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.541831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.541844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.542186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.542198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.542549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.542562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 
00:29:38.043 [2024-12-06 13:37:24.542757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.542769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.543089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.543102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.543297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.543311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.543623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.543637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.543934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.543946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 
00:29:38.043 [2024-12-06 13:37:24.544294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.544307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.544631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.544644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.544968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.544981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.545340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.545354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.545701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.545714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 
00:29:38.043 [2024-12-06 13:37:24.546052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.546067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.546411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.546427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.546750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.043 [2024-12-06 13:37:24.546765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.043 qpair failed and we were unable to recover it. 00:29:38.043 [2024-12-06 13:37:24.547094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.547108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.547450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.547474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.547703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.547716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.548041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.548056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.548389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.548403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.548720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.548734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.549070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.549084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.549434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.549448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.549778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.549793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.550160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.550175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.550516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.550530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.550878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.550890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.551232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.551245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.551599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.551614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.551934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.551947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.552334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.552347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.552679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.552693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.553019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.553032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.553427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.553440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.553748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.553761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.553963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.553977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.554290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.554304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.554626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.554639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.554982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.554996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.555344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.555357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.555683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.555696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.556042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.556055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.556450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.556472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.556826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.556840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.557166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.557180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.557369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.557383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.557698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.557712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.558030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.558044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.558387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.558401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.558754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.558769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.559094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.559108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.559462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.559477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-12-06 13:37:24.559824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.559837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-12-06 13:37:24.560175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.044 [2024-12-06 13:37:24.560195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.560533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.560546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.560733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.560746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.561110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.561124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.561510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.561524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.561694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.561705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.562021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.562036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.562347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.562360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.562736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.562751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.563068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.563081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.563415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.563430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.563746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.563760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.564100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.564115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.564467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.564480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.564833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.564847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.565048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.565063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.565397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.565411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.565722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.565736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.566078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.566092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.566445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.566467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.566785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.566797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.567137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.567152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.567479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.567493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.567812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.567826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.568160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.568174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.568519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.568534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.568886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.568899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.569285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.569298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.569579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.569593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.569933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.569946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.570289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.570304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.570712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.570726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.571074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.571088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.571415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.571428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-12-06 13:37:24.571747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.571762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.572107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.572121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.572469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.572483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.572825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.572838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-12-06 13:37:24.573197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.045 [2024-12-06 13:37:24.573211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.573400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.573412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.573694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.573713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.574030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.574043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.574430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.574442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.574774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.575129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.575142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.575488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.575503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.575825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.575838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.576162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.576177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.576261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.576273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.576553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.576566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.576891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.576905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.577259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.577272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.577604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.577620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.577951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.577963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.578318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.578333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.578611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.578624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.578947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.578959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.579314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.579328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.579673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.579689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.580016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.580030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.580377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.580391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.580735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.580749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.581090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.581104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.581448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.581469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.581800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.581813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.581993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.582005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.582218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.582232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.582565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.582580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 00:29:38.046 [2024-12-06 13:37:24.582880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.046 [2024-12-06 13:37:24.582893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.046 qpair failed and we were unable to recover it. 
00:29:38.046 [2024-12-06 13:37:24.583222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.583236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.583575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.583588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.583950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.583964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.584286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.584300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.584627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.584641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.584957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.584970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.585322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.585336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.046 [2024-12-06 13:37:24.585686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.046 [2024-12-06 13:37:24.585699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.046 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.586020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.586033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.586376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.586391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.586578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.586591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.586943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.586959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.587298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.587312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.587686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.587699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.588050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.588064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.588409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.588422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.588615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.588629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.588966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.588978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.589326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.589341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.589701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.589715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.590048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.590061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.590420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.590435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.590757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.590772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.591113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.591127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.591477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.591491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.591757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.591771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.592092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.592107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.592459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.592473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.592783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.592796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.593011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.593025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.593344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.593356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.593695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.593708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.593895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.593908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.594261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.594275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.594616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.594632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.594965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.594979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.595174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.595188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.595535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.595549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.595896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.595911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.047 [2024-12-06 13:37:24.596259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.047 [2024-12-06 13:37:24.596272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.047 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.596471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.596485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.596773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.596787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.597142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.597158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.597507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.597520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.597841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.597853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.598196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.598211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.598556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.598570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.598916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.598931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.599284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.599300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.599628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.599642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.599957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.599972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.600330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.600349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.600668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.600681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.600996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.601009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.601393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.601408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.601738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.601751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.602086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.602099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.602466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.602480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.602781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.602793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.603103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.603117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.603433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.603445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.603792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.603806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.604143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.604158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.604513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.604527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.604942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.604958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.605297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.605311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.605620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.605962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.605975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.606319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.606335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.606672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.606687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.607012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.607024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.607345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.607363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.607695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.607709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.608053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.608067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.608395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.608410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.608751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.608765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.609111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.609125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.048 [2024-12-06 13:37:24.609466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.048 [2024-12-06 13:37:24.609481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.048 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.609692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.609706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.610021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.610035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.610372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.610386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.610698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.610711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.611057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.611072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.611417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.611432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.611749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.611763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.612103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.612118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.612443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.612464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.612854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.612870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.613207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.613220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.613563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.613577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.613909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.613921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.614257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.614272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.614623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.614638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.614952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.614967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.615381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.615395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.615726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.615741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.616093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.616106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.616462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.616474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.616803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.616817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.617161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.617176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.617518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.617532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.617851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.617864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.618218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.618231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.618615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.618630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.619003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.619016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.619358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.619372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.619689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.619703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.620044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.620059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.620410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.620425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.620813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.620827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.621170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.621183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.621533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.621547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.621896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.621912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.622250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.622263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.622613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.049 [2024-12-06 13:37:24.622629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.049 qpair failed and we were unable to recover it.
00:29:38.049 [2024-12-06 13:37:24.622968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.049 [2024-12-06 13:37:24.622982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.049 qpair failed and we were unable to recover it. 00:29:38.049 [2024-12-06 13:37:24.623323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.623339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.623676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.623691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.624019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.624034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.624380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.624395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.624748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.624762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.625102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.625115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.625466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.625480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.625827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.625840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.626167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.626181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.626523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.626537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.626899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.626913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.627247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.627261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.627606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.627623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.627855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.627868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.628053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.628066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.628413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.628433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.628652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.628666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.629017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.629031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.629350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.629363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.629719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.629734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.629941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.629954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.630146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.630161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.630338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.630352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.630713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.630728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.631055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.631069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.631409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.631425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.631765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.631780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.632119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.632133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.632494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.632509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.632840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.632855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.633192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.633209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.633554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.633568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.633909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.633923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.634270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.634283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 
00:29:38.050 [2024-12-06 13:37:24.634512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.634524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.634851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.634864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.635203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.635219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.635565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.050 [2024-12-06 13:37:24.635580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.050 qpair failed and we were unable to recover it. 00:29:38.050 [2024-12-06 13:37:24.635929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.635944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.636270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.636285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.636600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.636615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.636955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.636968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.637310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.637324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.637665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.637682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.638021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.638039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.638380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.638393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.638715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.638731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.639049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.639065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.639380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.639394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.639707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.639722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.640062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.640077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.640418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.640432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.640787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.640803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.641142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.641156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.641496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.641511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.641873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.641894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.642229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.642241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.642589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.642603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.642953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.642969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.643314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.643329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.643677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.643690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.644039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.644053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.644403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.644419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.644828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.644842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.645185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.645200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.645540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.645554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.645798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.645811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.646185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.646198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.646525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.646540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.646898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.646912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.647253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.647267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.647625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.647640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.647984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.647999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 00:29:38.051 [2024-12-06 13:37:24.648333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.051 [2024-12-06 13:37:24.648348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.051 qpair failed and we were unable to recover it. 
00:29:38.051 [2024-12-06 13:37:24.648695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.051 [2024-12-06 13:37:24.648707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.051 qpair failed and we were unable to recover it.
00:29:38.051 [2024-12-06 13:37:24.649059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.051 [2024-12-06 13:37:24.649073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.051 qpair failed and we were unable to recover it.
00:29:38.051 [2024-12-06 13:37:24.649268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.649283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.649611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.649625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.649982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.649997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.650350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.650363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.650711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.650725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.651072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.651088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.651415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.651430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.651747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.651764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.651973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.651987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.652304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.652319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.652677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.652692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.653041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.653056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.653287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.653300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.653684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.653699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.654037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.654052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.654406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.654421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.654747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.654761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.655112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.655126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.655471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.655485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.655829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.655845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.656186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.656199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.656550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.656565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.656893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.656906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.657254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.657268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.657612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.657626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.657965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.657979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.658313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.658326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.658666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.658681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.659060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.659073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.659433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.659447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.659781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.659795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.660150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.660165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.660471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.660485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.052 [2024-12-06 13:37:24.660799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.052 [2024-12-06 13:37:24.660812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.052 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.661146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.661161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.661519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.661532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.661765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.661778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.662028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.662041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.662365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.662377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.662706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.662720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.662936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.662948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.663178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.663193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.663521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.663534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.663904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.663918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.664234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.664246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.664599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.664612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.664931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.664944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.665147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.665160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.665469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.665482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.665858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.665872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.666191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.666205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.666554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.666568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.666916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.666931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.667280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.667294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.667630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.667645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.667956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.667970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.668291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.668305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.668652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.668666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.669006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.669021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.669381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.669397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.053 qpair failed and we were unable to recover it.
00:29:38.053 [2024-12-06 13:37:24.669745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.053 [2024-12-06 13:37:24.669760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.669974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.669992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.670332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.670345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.670722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.670736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.671025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.671037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.671376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.671389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.671715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.671728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.671962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.671975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.672294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.672306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.672638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.672651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.672991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.673008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.673354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.673367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.673707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.338 [2024-12-06 13:37:24.673720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.338 qpair failed and we were unable to recover it.
00:29:38.338 [2024-12-06 13:37:24.674108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.674122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.674491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.674505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.674845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.674857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.675199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.675212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.675556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.675571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.675906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.675919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.676277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.676292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.676656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.676669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.676972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.676985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.677326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.677339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.677746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.677760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.678099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.678113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.678464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.678477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.678812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.678828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.679174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.679187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.679513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.679528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.679885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.679898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.680242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.680256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.680615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.680629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.680969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.680983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.681329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.681343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.681678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.681691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.682034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.682047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.682391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.682405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.682708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.682722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.683064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.683079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.683422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.683438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.683751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.683766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.684104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.684117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.684450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.684471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.684817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.684830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.685153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.685167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.685516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.685530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.685876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.685890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.686110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.686123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.686449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.686467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.686753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.686766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.687093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.339 [2024-12-06 13:37:24.687107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.339 qpair failed and we were unable to recover it.
00:29:38.339 [2024-12-06 13:37:24.687335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.340 [2024-12-06 13:37:24.687349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.340 qpair failed and we were unable to recover it.
00:29:38.340 [2024-12-06 13:37:24.687621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.340 [2024-12-06 13:37:24.687634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.340 qpair failed and we were unable to recover it.
00:29:38.340 [2024-12-06 13:37:24.687966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.687979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.688337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.688350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.688541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.688555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.688846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.688861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.689207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.689222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.689545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.689558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.689889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.689902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.690242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.690255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.690621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.690635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.690972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.690986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.691333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.691347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.691701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.691715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.691908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.691920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.692258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.692272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.692654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.692669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.692999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.693011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.693335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.693349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.693657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.693671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.693867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.693881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.694203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.694218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.694566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.694579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.694902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.694915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.695235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.695248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.695646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.695660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.696006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.696021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.696339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.696352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.696671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.696693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.697024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.697037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.697345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.697360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.697702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.697715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.698065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.698079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.698411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.698426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.698767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.698781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.699121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.699136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 00:29:38.340 [2024-12-06 13:37:24.699448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.340 [2024-12-06 13:37:24.699470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.340 qpair failed and we were unable to recover it. 
00:29:38.340 [2024-12-06 13:37:24.699806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.699820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.700170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.700185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.700536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.700549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.700898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.700912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.701267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.701280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.701629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.701644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.701960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.701974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.702317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.702331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.702630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.702644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.703002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.703016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.703342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.703356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.703676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.703690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.704047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.704061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.704407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.704421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.704733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.704747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.705104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.705117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.705446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.705467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.705770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.705783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.706127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.706141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.706487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.706501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.706812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.706826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.707157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.707169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.707504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.707520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.707872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.707885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.708241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.708255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.708610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.708624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.708975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.708989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.709338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.709352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.709682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.709695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.710043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.710057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.710381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.710395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.710756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.710771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.711104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.711118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.711477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.711491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.711830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.711844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 
00:29:38.341 [2024-12-06 13:37:24.712191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.712205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.712545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.712560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.712897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.341 [2024-12-06 13:37:24.712911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.341 qpair failed and we were unable to recover it. 00:29:38.341 [2024-12-06 13:37:24.713254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.342 [2024-12-06 13:37:24.713268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.342 qpair failed and we were unable to recover it. 00:29:38.342 [2024-12-06 13:37:24.713631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.342 [2024-12-06 13:37:24.713646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.342 qpair failed and we were unable to recover it. 
00:29:38.342 [2024-12-06 13:37:24.713975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.342 [2024-12-06 13:37:24.713989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.342 qpair failed and we were unable to recover it. 
[... same connect()/qpair-failed error triplet repeated for every retry between 13:37:24.713975 and 13:37:24.752865, all with errno = 111, tqpair=0x7f0a38000b90, addr=10.0.0.2, port=4420 ...]
00:29:38.345 [2024-12-06 13:37:24.752852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.752865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 
00:29:38.345 [2024-12-06 13:37:24.753206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.753220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.753560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.753575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.753899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.753912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.754221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.754236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.754576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.754589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 
00:29:38.345 [2024-12-06 13:37:24.754915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.754929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.755240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.755254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.755452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.755481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.755774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.755787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.756138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.756153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 
00:29:38.345 [2024-12-06 13:37:24.756489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.756503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.756843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.756857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.757192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.757207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.757555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.757568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.757886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.757901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 
00:29:38.345 [2024-12-06 13:37:24.758252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.758265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.758594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.758610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.758930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.758943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.759230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.759242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.759592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.759606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 
00:29:38.345 [2024-12-06 13:37:24.759918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.759933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.760289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.760302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.760631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.760647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.760951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.760966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 00:29:38.345 [2024-12-06 13:37:24.761310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.345 [2024-12-06 13:37:24.761325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.345 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.761669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.761683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.762028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.762043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.762392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.762405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.762749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.762763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.763103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.763116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.763463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.763478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.763830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.763843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.764183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.764198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.764544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.764557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.764885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.764899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.765243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.765256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.765609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.765624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.765971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.765984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.766329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.766343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.766672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.766686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.767039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.767052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.767385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.767398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.767740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.767755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.768106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.768120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.768465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.768481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.768820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.768833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.769208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.769221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.769564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.769577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.769772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.769786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.770083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.770096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.770435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.770449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.770798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.770812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.771005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.771018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.771335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.771349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.771707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.771721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.772046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.772061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.772419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.772433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.772787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.772803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.773148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.773163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.773504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.773518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 
00:29:38.346 [2024-12-06 13:37:24.773881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.773894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.774240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.774253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.346 [2024-12-06 13:37:24.774559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.346 [2024-12-06 13:37:24.774572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.346 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.774870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.774886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.775227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.775242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 
00:29:38.347 [2024-12-06 13:37:24.775592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.775606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.775930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.775945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.776306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.776319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.776663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.776678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.777034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.777047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 
00:29:38.347 [2024-12-06 13:37:24.777426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.777440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.777749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.777761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.777967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.777981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.778181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.778194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 00:29:38.347 [2024-12-06 13:37:24.778520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.778533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it. 
00:29:38.347 [2024-12-06 13:37:24.778740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.347 [2024-12-06 13:37:24.778752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.347 qpair failed and we were unable to recover it.
[identical connect() failed / qpair failed message triples for tqpair=0x7f0a38000b90 repeated through 13:37:24.817814]
00:29:38.350 [2024-12-06 13:37:24.818148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.350 [2024-12-06 13:37:24.818162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.350 qpair failed and we were unable to recover it. 00:29:38.350 [2024-12-06 13:37:24.818515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.350 [2024-12-06 13:37:24.818529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.350 qpair failed and we were unable to recover it. 00:29:38.350 [2024-12-06 13:37:24.818865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.350 [2024-12-06 13:37:24.818878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.350 qpair failed and we were unable to recover it. 00:29:38.350 [2024-12-06 13:37:24.819229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.350 [2024-12-06 13:37:24.819243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.350 qpair failed and we were unable to recover it. 00:29:38.350 [2024-12-06 13:37:24.819435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.350 [2024-12-06 13:37:24.819448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.350 qpair failed and we were unable to recover it. 
00:29:38.350 [2024-12-06 13:37:24.819757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.350 [2024-12-06 13:37:24.819772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.350 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.820115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.820128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.820476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.820491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.820830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.820844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.821195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.821209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 [2024-12-06 13:37:24.821585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.821598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.821802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.821814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.822135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.822148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.822488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.822501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.822874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.822887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 [2024-12-06 13:37:24.823243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.823256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.823601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.823614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.823922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.823934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.824274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.824289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.824684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.824697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 [2024-12-06 13:37:24.825025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.825039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.825358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.825370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.825689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.825701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.826056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.826069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.826395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.826412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 [2024-12-06 13:37:24.826756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.826771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2339157 Killed "${NVMF_APP[@]}" "$@" 00:29:38.351 [2024-12-06 13:37:24.827120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.827136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.827470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.827485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.827828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.827841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:38.351 [2024-12-06 13:37:24.828166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.828181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:38.351 [2024-12-06 13:37:24.828527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.828541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.351 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.351 [2024-12-06 13:37:24.828895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.351 [2024-12-06 13:37:24.829250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.829264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.829592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.829607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.829936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.829949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.830175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.830188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.830522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.830535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 
00:29:38.351 [2024-12-06 13:37:24.830855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.830870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.831194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.831206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.831543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.831558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.351 [2024-12-06 13:37:24.831900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.351 [2024-12-06 13:37:24.831913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.351 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.832135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.832147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 
00:29:38.352 [2024-12-06 13:37:24.832497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.832510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.832929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.832944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.833275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.833289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.833616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.833629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.834005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.834019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 
00:29:38.352 [2024-12-06 13:37:24.834234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.834246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.834562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.834576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.834907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.834921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.835247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.835260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.835617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.835632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 
00:29:38.352 [2024-12-06 13:37:24.835982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.835996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.836189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.836201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.836399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.836412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.836744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.836757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.837102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.837117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 
00:29:38.352 [2024-12-06 13:37:24.837473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.837493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2340188 00:29:38.352 [2024-12-06 13:37:24.837837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.837852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2340188 00:29:38.352 [2024-12-06 13:37:24.838169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.838183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 
00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2340188 ']' 00:29:38.352 [2024-12-06 13:37:24.838400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.838413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.352 [2024-12-06 13:37:24.838768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.838782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.352 [2024-12-06 13:37:24.838991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.839006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:38.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.352 [2024-12-06 13:37:24.839367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.839382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 13:37:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.352 [2024-12-06 13:37:24.839811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.839830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.840124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.840140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 00:29:38.352 [2024-12-06 13:37:24.840361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.352 [2024-12-06 13:37:24.840375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.352 qpair failed and we were unable to recover it. 
00:29:38.352 [2024-12-06 13:37:24.840709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.352 [2024-12-06 13:37:24.840724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.352 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111) and qpair recovery error pair repeats continuously from 13:37:24.840 through 13:37:24.876 for tqpair=0x7f0a38000b90 (addr=10.0.0.2, port=4420); only the timestamps differ ...]
00:29:38.356 [2024-12-06 13:37:24.876724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.356 [2024-12-06 13:37:24.876738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.356 qpair failed and we were unable to recover it.
00:29:38.356 [2024-12-06 13:37:24.877055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.877071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.877276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.877290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.877648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.877662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.877851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.877866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.878188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.878206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.878419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.878436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.878777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.878791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.879002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.879016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.879379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.879395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.879733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.879747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.880110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.880126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.880329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.880344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.880655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.880672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.881014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.881027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.881179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.881193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.881530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.881547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.881756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.881771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.882108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.882125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.882441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.882463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.882763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.882779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.882961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.882978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.883336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.883352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.883690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.883705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.884097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.884111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.884428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.884442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.884924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.884938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.885306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.885321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.885556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.885572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.885874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.885890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.886227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.886241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.886577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.886591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.886953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.886968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.887337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.887353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.887540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.887555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.887880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.887895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 
00:29:38.356 [2024-12-06 13:37:24.888230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.356 [2024-12-06 13:37:24.888246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.356 qpair failed and we were unable to recover it. 00:29:38.356 [2024-12-06 13:37:24.888591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.888605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.888795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.888809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.889173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.889190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.889535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.889552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 
00:29:38.357 [2024-12-06 13:37:24.889901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.889916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.890260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.890273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.890624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.890638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.890988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.891002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.891382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.891398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 
00:29:38.357 [2024-12-06 13:37:24.891733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.891748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.891939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.891952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.892242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.892257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.892577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.892592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.892799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.892815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 
00:29:38.357 [2024-12-06 13:37:24.893152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.893169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.893358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.893372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.893688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.893704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.894050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.894064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.894465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.894479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 
00:29:38.357 [2024-12-06 13:37:24.894836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.894850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.895082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.895097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.895299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.895316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.895661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.895678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.896014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.896028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 
00:29:38.357 [2024-12-06 13:37:24.896387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.896404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.896751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.896765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.897093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.897109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.897469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.897486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 00:29:38.357 [2024-12-06 13:37:24.897844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.357 [2024-12-06 13:37:24.897861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.357 qpair failed and we were unable to recover it. 
00:29:38.357 [2024-12-06 13:37:24.898754] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization...
00:29:38.357 [2024-12-06 13:37:24.898823] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:38.358 [2024-12-06 13:37:24.907803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.907819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.908146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.908161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.908495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.908513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.908870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.908884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.909247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.909262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 
00:29:38.358 [2024-12-06 13:37:24.909624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.909641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.909982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.909999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.910431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.910445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.910771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.910785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.911144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.911157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 
00:29:38.358 [2024-12-06 13:37:24.911521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.911535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.358 [2024-12-06 13:37:24.911871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.358 [2024-12-06 13:37:24.911884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.358 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.912243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.912257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.912492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.912505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.912717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.912729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.913065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.913078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.913277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.913291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.913624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.913638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.913976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.913989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.914374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.914387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.914655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.914670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.915060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.915073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.915431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.915444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.915804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.915817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.916150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.916163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.916515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.916899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.916912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.917134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.917148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.917343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.917355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.917555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.917569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.917927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.917940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.918293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.918306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.918646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.918660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.918862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.918876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.919292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.919305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.919637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.919650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.919992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.920005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.920351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.920363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.920698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.920711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.921049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.921062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.921431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.921444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.921683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.921696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.921883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.921899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.922249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.922263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.922583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.922597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.922958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.922972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.923196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.923208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.923512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.923526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.923743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.923757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 00:29:38.359 [2024-12-06 13:37:24.924087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.359 [2024-12-06 13:37:24.924100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.359 qpair failed and we were unable to recover it. 
00:29:38.359 [2024-12-06 13:37:24.924504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.924518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.924882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.924895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.925125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.925138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.925482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.925497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.925847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.925860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 
00:29:38.360 [2024-12-06 13:37:24.926070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.926085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.926298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.926311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.926625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.926638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.926972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.926985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.927337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.927350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 
00:29:38.360 [2024-12-06 13:37:24.927557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.927571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.927835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.927847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.928245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.928258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.928587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.928600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.928928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.928942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 
00:29:38.360 [2024-12-06 13:37:24.929278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.929292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.929506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.929520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.929858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.929870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.930232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.930245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.930645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.930659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 
00:29:38.360 [2024-12-06 13:37:24.931017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.931029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.931382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.931394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.931840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.931854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.932253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.932267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 00:29:38.360 [2024-12-06 13:37:24.932613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.932627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it. 
00:29:38.360 [2024-12-06 13:37:24.932982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.360 [2024-12-06 13:37:24.932994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.360 qpair failed and we were unable to recover it.
00:29:38.360 [... the same three-record error (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats through 13:37:24.969480 ...]
00:29:38.363 [2024-12-06 13:37:24.969687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.363 [2024-12-06 13:37:24.969699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.363 qpair failed and we were unable to recover it. 00:29:38.363 [2024-12-06 13:37:24.970043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.363 [2024-12-06 13:37:24.970056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.363 qpair failed and we were unable to recover it. 00:29:38.363 [2024-12-06 13:37:24.970275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.363 [2024-12-06 13:37:24.970287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.363 qpair failed and we were unable to recover it. 00:29:38.363 [2024-12-06 13:37:24.970617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.363 [2024-12-06 13:37:24.970631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.363 qpair failed and we were unable to recover it. 00:29:38.363 [2024-12-06 13:37:24.970976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.363 [2024-12-06 13:37:24.970991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.364 qpair failed and we were unable to recover it. 
00:29:38.364 [2024-12-06 13:37:24.971391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.364 [2024-12-06 13:37:24.971404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.364 qpair failed and we were unable to recover it. 00:29:38.364 [2024-12-06 13:37:24.971810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.364 [2024-12-06 13:37:24.971824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.364 qpair failed and we were unable to recover it. 00:29:38.364 [2024-12-06 13:37:24.972147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.364 [2024-12-06 13:37:24.972159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.364 qpair failed and we were unable to recover it. 00:29:38.364 [2024-12-06 13:37:24.972354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.364 [2024-12-06 13:37:24.972368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.364 qpair failed and we were unable to recover it. 00:29:38.364 [2024-12-06 13:37:24.972677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.364 [2024-12-06 13:37:24.972690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.364 qpair failed and we were unable to recover it. 
00:29:38.633 [2024-12-06 13:37:24.973047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.973064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.973249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.973264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.973597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.973927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.973940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.974294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.974307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 
00:29:38.633 [2024-12-06 13:37:24.974683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.974695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.975045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.975058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.975393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.975408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.975773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.975786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.976121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.976137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 
00:29:38.633 [2024-12-06 13:37:24.976491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.976505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.976812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.976824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.977163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.977175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.977352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.977364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.977678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.977692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 
00:29:38.633 [2024-12-06 13:37:24.978024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.978037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.978357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.978370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.978731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.978746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.979112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.979125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.633 qpair failed and we were unable to recover it. 00:29:38.633 [2024-12-06 13:37:24.979467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.633 [2024-12-06 13:37:24.979482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.979678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.979691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.979906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.979918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.980249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.980263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.980592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.980605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.980957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.980970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.981319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.981333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.981635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.981648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.981950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.981963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.982277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.982289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.982506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.982519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.982873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.982886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.983220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.983234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.983462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.983476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.983825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.983839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.984185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.984198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.984381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.984394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.984760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.984774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.984961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.984973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.985322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.985335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.985732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.985749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.986110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.986123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.986467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.986481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.986801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.986819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.987158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.987171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.987472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.987485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.987811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.987824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.988013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.988025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.988229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.988241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.988560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.988574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.988988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.989005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.989206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.989222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.989575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.989592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.989923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.989941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.990304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.990319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.990679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.990693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.991071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.991084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.991444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.991466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.991814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.991828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.992155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.992168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.992508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.992522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 
00:29:38.634 [2024-12-06 13:37:24.992849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.992863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.993187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.993200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.993348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.993360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.634 qpair failed and we were unable to recover it. 00:29:38.634 [2024-12-06 13:37:24.993700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.634 [2024-12-06 13:37:24.993713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.635 qpair failed and we were unable to recover it. 00:29:38.635 [2024-12-06 13:37:24.994070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.635 [2024-12-06 13:37:24.994083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.635 qpair failed and we were unable to recover it. 
00:29:38.635 [2024-12-06 13:37:24.994419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.635 [2024-12-06 13:37:24.994433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.635 qpair failed and we were unable to recover it.
[The connect()/qpair-failure triplet above (errno = 111, ECONNREFUSED, against 10.0.0.2:4420, tqpair=0x7f0a38000b90) repeats continuously from 13:37:24.994419 through 13:37:25.032917; repeated entries omitted. One distinct entry appears in that window:]
00:29:38.635 [2024-12-06 13:37:25.003327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:38.637 [2024-12-06 13:37:25.033261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.033274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.033641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.033655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.033976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.033990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.034299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.034312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.034633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.034645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 
00:29:38.637 [2024-12-06 13:37:25.034947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.034960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.035315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.035328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.035542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.035556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.035890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.035902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.036256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.036269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 
00:29:38.637 [2024-12-06 13:37:25.036629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.036643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.037040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.037053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.037384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.037396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.037643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.037657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.637 [2024-12-06 13:37:25.037831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.037844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 
00:29:38.637 [2024-12-06 13:37:25.038141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.637 [2024-12-06 13:37:25.038156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.637 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.038500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.038514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.038837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.038850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.039202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.039215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.039531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.039544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.039902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.039915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.040267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.040279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.040628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.040644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.040968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.040981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.041325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.041338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.041535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.041549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.041750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.041763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.042108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.042121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.042446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.042469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.042675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.042688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.043003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.043017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.043349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.043363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.043703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.043716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.044059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.044073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.044414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.044427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.044756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.044769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.044969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.044982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.045185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.045197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.045527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.045541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.045852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.045864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.046208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.046222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.046556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.046570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.046784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.046797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.047011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.047026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.047393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.047406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.047643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.047655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.047997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.048011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.048361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.048375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.048763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.048777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.049128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.049143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.049361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.049375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.049718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.049732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.050056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.050070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.050417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.050431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.050750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.050763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.051093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.051107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.051469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.051485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.051837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.051850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.052180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.052196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.052550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.052563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 
00:29:38.638 [2024-12-06 13:37:25.052896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.052908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.053219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.053232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.053581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.638 [2024-12-06 13:37:25.053598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.638 qpair failed and we were unable to recover it. 00:29:38.638 [2024-12-06 13:37:25.053941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.053954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.054319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.054332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 
00:29:38.639 [2024-12-06 13:37:25.054683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.054697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.054905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.054917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.055242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.055257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.055568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.055581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.055895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.055907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.055943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:38.639 [2024-12-06 13:37:25.055990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:38.639 [2024-12-06 13:37:25.055999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:38.639 [2024-12-06 13:37:25.056007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:38.639 [2024-12-06 13:37:25.056013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:38.639 [2024-12-06 13:37:25.058042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:38.639 [2024-12-06 13:37:25.058203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:38.639 [2024-12-06 13:37:25.058294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:38.639 [2024-12-06 13:37:25.058293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:38.639 [2024-12-06 13:37:25.058939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.058954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.059308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.059321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.059714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.059727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.060041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.060054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 00:29:38.639 [2024-12-06 13:37:25.060271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.639 [2024-12-06 13:37:25.060285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.639 qpair failed and we were unable to recover it. 
00:29:38.639 [2024-12-06 13:37:25.060598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.060611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.060832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.060843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.061064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.061076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.061361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.061376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.061639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.061653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.061849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.061863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.062215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.062230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.062561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.062576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.062915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.062929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.063255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.063269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.063580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.063593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.063818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.063829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.064165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.064178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.064520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.064533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.064899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.064911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.065244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.065258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.065615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.065628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.065956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.065968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.066279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.066292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.066645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.066660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.066968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.066981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.067195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.067207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.067514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.067527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.067826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.067839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.068155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.068169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.068376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.068388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.068485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.639 [2024-12-06 13:37:25.068495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.639 qpair failed and we were unable to recover it.
00:29:38.639 [2024-12-06 13:37:25.068849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.068861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.069079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.069091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.069437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.069475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.069820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.069832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.070026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.070039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.070405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.070418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.070771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.070784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.071013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.071026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.071219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.071230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.071582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.071596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.071953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.071966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.072298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.072312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.072622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.072636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.072971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.072985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.073294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.073307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.073637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.073651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.073845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.073858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.074210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.074224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.074572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.074586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.074923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.074938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.075041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.075055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.075239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.075254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.075462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.075477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.075818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.075830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.076149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.076162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.076497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.076510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.076862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.076876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.077197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.077211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.077530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.077543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.077887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.077900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.078204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.078218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.078536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.078549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.078742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.078756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.079098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.079111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.079304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.079317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.079489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.079501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.079846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.079859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.080058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.080071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.080385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.080398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.080718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.080733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.080926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.080940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.081286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.081301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.081607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.081624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.081940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.081954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.082308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.082321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.082630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.640 [2024-12-06 13:37:25.082643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.640 qpair failed and we were unable to recover it.
00:29:38.640 [2024-12-06 13:37:25.082957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.082970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.083290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.083302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.083651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.083665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.084009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.084022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.084375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.084387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.084717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.084730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.085039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.085052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.085397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.085410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.085798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.085811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.086166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.086178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.086354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.086367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.086589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.086602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.086921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.086934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.087268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.087281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.087515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.087528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.087698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.087711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.087903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.087917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.088099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.088111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.088420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.088435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.088761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.088774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.089128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.089142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.089483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.641 [2024-12-06 13:37:25.089497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.641 qpair failed and we were unable to recover it.
00:29:38.641 [2024-12-06 13:37:25.089793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.089808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.090145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.090159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.090477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.090492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.090699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.090715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.091078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.091091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 
00:29:38.641 [2024-12-06 13:37:25.091299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.091313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.091623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.091638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.091846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.091862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.092182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.092198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.092540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.092555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 
00:29:38.641 [2024-12-06 13:37:25.092848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.092863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.093200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.093215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.093564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.093579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.093919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.093935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.094281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.094300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 
00:29:38.641 [2024-12-06 13:37:25.094532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.094548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.094741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.094754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.095102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.095117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.095462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.095478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.095665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.095680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 
00:29:38.641 [2024-12-06 13:37:25.096024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.096038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.096348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.096363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.096714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.096728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.097075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.097091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.097298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.097314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 
00:29:38.641 [2024-12-06 13:37:25.097596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.097610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.097951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.097966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.098320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.098333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.641 [2024-12-06 13:37:25.098522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.641 [2024-12-06 13:37:25.098538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.641 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.098892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.098907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.099081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.099095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.099295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.099308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.099501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.099515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.099867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.099881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.100207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.100221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.100532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.100546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.100896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.100910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.101113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.101125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.101340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.101353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.101528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.101541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.101856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.101870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.102232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.102248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.102573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.102586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.102768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.102781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.103116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.103129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.103481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.103497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.103838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.103853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.104202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.104215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.104538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.104552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.104741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.104754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.105161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.105176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.105362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.105377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.105581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.105594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.105968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.105982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.106297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.106311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.106615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.106631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.106988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.107003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.107307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.107322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.107508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.107521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.107883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.107898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.108223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.108237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.108547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.108562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.108912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.108927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.109279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.109294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.109483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.109495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.109852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.109866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.110127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.110141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.110330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.110344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.110647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.110663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.111024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.111038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.111356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.111370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.111568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.111581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.111927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.111942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.112143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.112156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.112343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.112356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 
00:29:38.642 [2024-12-06 13:37:25.112562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.112577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.112793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.112808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.113112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.113126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.113464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.113479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.642 qpair failed and we were unable to recover it. 00:29:38.642 [2024-12-06 13:37:25.113758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.642 [2024-12-06 13:37:25.113773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 
00:29:38.643 [2024-12-06 13:37:25.113954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.113966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.114274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.114291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.114626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.114640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.114992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.115007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.115327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.115340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 
00:29:38.643 [2024-12-06 13:37:25.115665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.115679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.116010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.116025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.116375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.116390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.116718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.116731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 00:29:38.643 [2024-12-06 13:37:25.117042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.643 [2024-12-06 13:37:25.117054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.643 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.151091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.151105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.151507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.151527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.151711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.151725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.152072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.152086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.152442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.152462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.152762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.152775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.153137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.153150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.153474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.153489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.153829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.153843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.154170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.154184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.154545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.154558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.154900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.154914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.155209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.155221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.155546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.155561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.155779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.155794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.156085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.156101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.156402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.156414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.156738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.156752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.156975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.156989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.157178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.157192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.157516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.157529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.157741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.157753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.158086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.158102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.158286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.158298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.158656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.158670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.159017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.159030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.159387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.159400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.159590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.159605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.159782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.159798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.160124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.160138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.160451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.160474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.160668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.160682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.160991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.161005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.161346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.161360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.645 [2024-12-06 13:37:25.161668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.161684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 
00:29:38.645 [2024-12-06 13:37:25.161889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.645 [2024-12-06 13:37:25.161903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.645 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.162261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.162278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.162619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.162634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.162987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.163002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.163322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.163337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.163667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.163682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.163876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.163893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.164197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.164210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.164392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.164405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.164745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.164761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.165098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.165111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.165426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.165439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.165657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.165671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.165997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.166012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.166367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.166381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.166734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.166748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.167067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.167079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.167268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.167284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.167630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.167645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.167956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.167970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.168301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.168316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.168645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.168660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.168983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.168996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.169300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.169315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.169632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.169647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.169859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.169874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.170214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.170229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.170584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.170598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.170796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.170807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.171166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.171176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.171498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.171508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.171844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.171855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.172055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.172064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.172365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.172376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.172716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.172728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.172926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.172938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.173112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.173124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.173477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.173489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.173823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.173836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 00:29:38.646 [2024-12-06 13:37:25.174189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.646 [2024-12-06 13:37:25.174201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.646 qpair failed and we were unable to recover it. 
00:29:38.646 [2024-12-06 13:37:25.174536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.646 [2024-12-06 13:37:25.174552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.646 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 13:37:25.174536 through 13:37:25.210238 ...]
00:29:38.648 [2024-12-06 13:37:25.210225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.648 [2024-12-06 13:37:25.210238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.648 qpair failed and we were unable to recover it.
00:29:38.648 [2024-12-06 13:37:25.210426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.210438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.648 [2024-12-06 13:37:25.210770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.210784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.648 [2024-12-06 13:37:25.211124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.211138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.648 [2024-12-06 13:37:25.211438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.211452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.648 [2024-12-06 13:37:25.211766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.211780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 
00:29:38.648 [2024-12-06 13:37:25.211971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.211983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.648 [2024-12-06 13:37:25.212314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.212328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.648 [2024-12-06 13:37:25.212634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.648 [2024-12-06 13:37:25.212648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.648 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.212970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.212982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.213291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.213304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.213632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.213645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.213967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.213983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.214335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.214347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.214664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.214677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.215012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.215025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.215376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.215389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.215717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.215730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.216046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.216059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.216387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.216400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.216599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.216613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.216798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.216811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.217010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.217023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.217206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.217216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.217411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.217424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.217767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.217782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.218133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.218146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.218504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.218517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.218871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.218884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.219238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.219251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.219572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.219586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.219901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.219914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.220263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.220276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.220633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.220647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.220989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.221002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.221308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.221320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.221626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.221639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.221962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.221974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.222292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.222306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.222664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.222679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.223034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.223048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.223401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.223415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.223810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.223823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.224176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.224190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.224419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.224432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.224775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.224790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.225141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.225155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.225467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.225479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.225833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.225846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.226038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.226050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.226379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.226392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.226578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.226590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.226942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.226958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.227158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.227171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.227508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.227522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.227831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.227843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.228199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.228521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.228534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.228829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.228841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.229180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.229193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.229422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.229436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.229798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.229813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.230100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.230115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.230473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.230488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.230812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.230825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.231128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.231140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 
00:29:38.649 [2024-12-06 13:37:25.231484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.231497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.231720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.231732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.231942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.649 [2024-12-06 13:37:25.231955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.649 qpair failed and we were unable to recover it. 00:29:38.649 [2024-12-06 13:37:25.232268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.232281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 00:29:38.650 [2024-12-06 13:37:25.232632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.232645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 
00:29:38.650 [2024-12-06 13:37:25.232953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.232966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 00:29:38.650 [2024-12-06 13:37:25.233309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.233322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 00:29:38.650 [2024-12-06 13:37:25.233683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.233696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 00:29:38.650 [2024-12-06 13:37:25.234022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.234035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 00:29:38.650 [2024-12-06 13:37:25.234390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.650 [2024-12-06 13:37:25.234402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.650 qpair failed and we were unable to recover it. 
00:29:38.650 [2024-12-06 13:37:25.234732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.234748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.234942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.234957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.235299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.235311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.235528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.235542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.235787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.235801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.236107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.236120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.236463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.236475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.236824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.236838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.237159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.237174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.237471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.237484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.237807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.237819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.238136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.238149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.238498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.238511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.238869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.238882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.239205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.239220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.239533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.239546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.239836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.239851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.240202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.240216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.240538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.240552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.240893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.240905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.241227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.241241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.241435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.241448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.241779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.241793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.241993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.242005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.242194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.242205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.242398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.242411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.242610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.242625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.242949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.242962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.243163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.243175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.243504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.243517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.243888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.243901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.244209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.244221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.244562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.244574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.244898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.244910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.245231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.245243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.245606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.245619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.245946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.245958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.246134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.246147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.246348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.246361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.246696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.246708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.247056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.247070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.247311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.247326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.247512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.247528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.247744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.247757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.247933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.247945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.248141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.248154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.248344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.248359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.248668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.650 [2024-12-06 13:37:25.248683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.650 qpair failed and we were unable to recover it.
00:29:38.650 [2024-12-06 13:37:25.249033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.249048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.249398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.249413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.249736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.249751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.250061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.250075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.250422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.250436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.250747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.250763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.250986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.251000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.251324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.251339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.251695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.251713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.252094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.252108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.252466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.252480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.252801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.252814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.253101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.253114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.253303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.253315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.253676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.253690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.254075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.254088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.254422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.254435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.254796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.254809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.255131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.255144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.255450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.255474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.255826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.255838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.256168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.256181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.256359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.256369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.256701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.256717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.257064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.257078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.257391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.257403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.257725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.257739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.258094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.258107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.258428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.258441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.258765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.258779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.259130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.259143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.259443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.259463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.259799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.259813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.260007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.260019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.260368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.260382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.260589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.260606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.260966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.260980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.261291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.261304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.261627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.261641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.262001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.262014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.262346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.262360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.262718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.262731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.263056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.263071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.263430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.263763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.263775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.263987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.263999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.264172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.264185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.264505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.264519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.264862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.264875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.265199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.265212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.265540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.265553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.265850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.265863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.266224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.266235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.266544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.266557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.266765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.266778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.267089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.267103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.267452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.267474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.267814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.267827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.268178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.268191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.268541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.268555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.268896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.651 [2024-12-06 13:37:25.268909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.651 qpair failed and we were unable to recover it.
00:29:38.651 [2024-12-06 13:37:25.269114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.652 [2024-12-06 13:37:25.269128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.652 qpair failed and we were unable to recover it.
00:29:38.652 [2024-12-06 13:37:25.269321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.652 [2024-12-06 13:37:25.269333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.652 qpair failed and we were unable to recover it.
00:29:38.652 [2024-12-06 13:37:25.269649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.652 [2024-12-06 13:37:25.269663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.652 qpair failed and we were unable to recover it.
00:29:38.652 [2024-12-06 13:37:25.270015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.652 [2024-12-06 13:37:25.270028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.652 qpair failed and we were unable to recover it.
00:29:38.652 [2024-12-06 13:37:25.270389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.652 [2024-12-06 13:37:25.270405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.652 qpair failed and we were unable to recover it.
00:29:38.652 [2024-12-06 13:37:25.270723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.652 [2024-12-06 13:37:25.270736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.652 qpair failed and we were unable to recover it.
00:29:38.652 [2024-12-06 13:37:25.271088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.271103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.271447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.271468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.271800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.271814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.272138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.272153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.272479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.272493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.652 [2024-12-06 13:37:25.272800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.272813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.273162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.273176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.273516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.273530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.273842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.273858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.274241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.274254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.652 [2024-12-06 13:37:25.274464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.274477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.274835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.274849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.275154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.275167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.275490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.275504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.275854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.275869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.652 [2024-12-06 13:37:25.276033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.276046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.276359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.276373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.276721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.276736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.277090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.277108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.277452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.277489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.652 [2024-12-06 13:37:25.277673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.277687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.278006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.278018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.278376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.278389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.278470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.278480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.278576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.278588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.652 [2024-12-06 13:37:25.278941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.278956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.279153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.279167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.279503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.279515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.279840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.279855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.280204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.280218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.652 [2024-12-06 13:37:25.280538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.280551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.280911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.280926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.281267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.281280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.281604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.281617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 00:29:38.652 [2024-12-06 13:37:25.281966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.652 [2024-12-06 13:37:25.281979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.652 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-12-06 13:37:25.282178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.282193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.282531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.282548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.282866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.282878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.283200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.283213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.283564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.283576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-12-06 13:37:25.283750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.283764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.284120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.284134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.284451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.284475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.284806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.284821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.285143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.285157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-12-06 13:37:25.285338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.285352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.285572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.285584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.285768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.285782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.285974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.285991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.286334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.286347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 
00:29:38.926 [2024-12-06 13:37:25.286711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.286727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.926 [2024-12-06 13:37:25.287044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.926 [2024-12-06 13:37:25.287059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.926 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.287389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.287403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.287644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.287659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.287893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.287907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-12-06 13:37:25.288094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.288108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.288418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.288431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.288610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.288626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.288962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.288975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.289156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.289171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-12-06 13:37:25.289543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.289557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.289920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.289934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.290288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.290302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.290722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.290736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.290930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.290942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-12-06 13:37:25.291083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.291095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.291440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.291464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.291664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.291676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.291998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.292011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.292403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.292417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-12-06 13:37:25.292733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.292746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.293088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.293101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.293289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.293301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.293669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.293682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 00:29:38.927 [2024-12-06 13:37:25.294034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.927 [2024-12-06 13:37:25.294049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.927 qpair failed and we were unable to recover it. 
00:29:38.927 [2024-12-06 13:37:25.294402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.927 [2024-12-06 13:37:25.294414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.927 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats without variation through [2024-12-06 13:37:25.330600] ...]
00:29:38.931 [2024-12-06 13:37:25.330949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.330962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.331152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.331164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.331365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.331377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.331685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.331699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.332059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 
00:29:38.931 [2024-12-06 13:37:25.332417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.332431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.332628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.332641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.332819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.332834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.333136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.333151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.333440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.333460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 
00:29:38.931 [2024-12-06 13:37:25.333817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.333830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.334170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.334185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.334380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.334394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.334574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.334589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.334775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.334787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 
00:29:38.931 [2024-12-06 13:37:25.335137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.335151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.335515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.335529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.335887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.335899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.336221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.336236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.336592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.336608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 
00:29:38.931 [2024-12-06 13:37:25.336956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.336969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.337288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.931 [2024-12-06 13:37:25.337300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.931 qpair failed and we were unable to recover it. 00:29:38.931 [2024-12-06 13:37:25.337635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.337649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.337985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.337999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.338326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.338339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.338665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.338680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.339002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.339015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.339360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.339374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.339689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.339702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.340060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.340076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.340418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.340434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.340793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.340810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.341153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.341168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.341472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.341487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.341826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.341840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.342186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.342200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.342550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.342566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.342760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.342774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.343127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.343141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.343443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.343466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.343809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.343823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.344139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.344500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.344515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.344914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.344927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.345253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.345267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.345615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.345630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.345915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.345929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.346130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.346143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.346437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.346450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.346794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.346811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.347152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.347166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.347511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.347528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.347869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.347883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.348243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.348259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.348472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.348488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 
00:29:38.932 [2024-12-06 13:37:25.348792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.348805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.349126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.349141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.349343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.349356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.349414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.932 [2024-12-06 13:37:25.349428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.932 qpair failed and we were unable to recover it. 00:29:38.932 [2024-12-06 13:37:25.349759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.349772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 
00:29:38.933 [2024-12-06 13:37:25.350105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.350118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.350468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.350481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.350813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.350827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.351010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.351022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.351352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.351368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 
00:29:38.933 [2024-12-06 13:37:25.351709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.351724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.352047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.352060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.352244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.352257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.352466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.352484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.352678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.352691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 
00:29:38.933 [2024-12-06 13:37:25.352881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.352893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.353203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.353217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.353398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.353412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.353771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.353784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.354103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.354117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 
00:29:38.933 [2024-12-06 13:37:25.354462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.354477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.354820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.354834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.355183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.355198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.355533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.355547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 00:29:38.933 [2024-12-06 13:37:25.355882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.933 [2024-12-06 13:37:25.355895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.933 qpair failed and we were unable to recover it. 
00:29:38.936 [2024-12-06 13:37:25.391243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.391257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.391476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.391489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.391829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.391846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.392196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.392209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.392305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.392315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 
00:29:38.936 [2024-12-06 13:37:25.392630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.392646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.392978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.392993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.393316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.393329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.393639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.393653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.394003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.394017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 
00:29:38.936 [2024-12-06 13:37:25.394198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.394212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.936 [2024-12-06 13:37:25.394521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.936 [2024-12-06 13:37:25.394534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.936 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.394870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.394883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.395089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.395103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.395458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.395471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.395841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.395855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.396172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.396186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.396525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.396539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.396774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.396788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.397141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.397155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.397479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.397493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.397845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.397858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.398181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.398195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.398506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.398519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.398838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.398850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.399010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.399024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.399410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.399422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.399742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.399755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.400044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.400060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.400401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.400415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.400734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.400748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.401106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.401119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.401440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.401460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.401772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.401785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.402104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.402117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.402471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.402485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.402836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.402847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.403042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.403057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.403352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.403365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.403579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.403594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.403894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.403906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.404239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.404252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.404578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.404591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.404924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.404936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.405268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.405282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.937 [2024-12-06 13:37:25.405613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.405626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.405814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.405826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.406188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.406200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.406550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.406563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 00:29:38.937 [2024-12-06 13:37:25.406915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.937 [2024-12-06 13:37:25.406929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.937 qpair failed and we were unable to recover it. 
00:29:38.938 [2024-12-06 13:37:25.407325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.407340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.407667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.407679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.408020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.408032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.408324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.408337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.408692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.408705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 
00:29:38.938 [2024-12-06 13:37:25.409062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.409075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.409270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.409285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.409463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.409477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.409820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.409833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.410186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.410198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 
00:29:38.938 [2024-12-06 13:37:25.410526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.410561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.410914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.410927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.411104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.411117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.411423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.411435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.411748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.411761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 
00:29:38.938 [2024-12-06 13:37:25.412043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.412056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.412396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.412410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.412738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.412751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.413073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.413088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.413418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.413430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 
00:29:38.938 [2024-12-06 13:37:25.413723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.413736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.414086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.414100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.414295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.414309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.414624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.414638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.414826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.414840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 
00:29:38.938 [2024-12-06 13:37:25.415173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.415186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.415378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.415391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.415687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.415700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.416028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.416041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 00:29:38.938 [2024-12-06 13:37:25.416415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.938 [2024-12-06 13:37:25.416429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.938 qpair failed and we were unable to recover it. 
00:29:38.941 [2024-12-06 13:37:25.450244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.450259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.941 [2024-12-06 13:37:25.450602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.450615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.941 [2024-12-06 13:37:25.450975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.450987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.941 [2024-12-06 13:37:25.451304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.451317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.941 [2024-12-06 13:37:25.451509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.451523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 
00:29:38.941 [2024-12-06 13:37:25.451838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.451850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.941 [2024-12-06 13:37:25.452054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.452068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.941 [2024-12-06 13:37:25.452363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.941 [2024-12-06 13:37:25.452375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.941 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.452564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.452577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.452764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.452777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.452990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.453002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.453333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.453345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.453666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.453679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.454021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.454034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.454391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.454405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.454766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.454779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.455110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.455123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.455444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.455463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.455799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.455812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.456135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.456147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.456466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.456479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.456821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.456835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.457177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.457190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.457532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.457545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.457871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.457883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.458220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.458276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.458607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.458621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.458958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.458971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.459162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.459176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.459528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.459541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.459865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.459877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.460228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.460240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.460450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.460469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.460758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.460772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.460999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.461012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.461353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.461367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.461572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.461587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.461945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.461957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.462280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.462292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.462584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.462597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.462891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.462903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.463095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.463108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.463461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.463476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.463663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.463677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.942 [2024-12-06 13:37:25.463868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.463881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 
00:29:38.942 [2024-12-06 13:37:25.464225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.942 [2024-12-06 13:37:25.464237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.942 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.464415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.464429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.464626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.464638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.464951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.464963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.465287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.465301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.465637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.465649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.465997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.466011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.466243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.466257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.466579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.466592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.466896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.466909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.467228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.467241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.467562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.467575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.467917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.467930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.468247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.468262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.468464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.468477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.468663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.468677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.468884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.468898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.469240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.469252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.469576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.469589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.469953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.469966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.470290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.470310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.470628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.470641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.470841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.470855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.471154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.471167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.471363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.471377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.471582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.471595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.471908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.471920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.472232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.472245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.472556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.472569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.472926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.472939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.473288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.473301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.473620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.473633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.473979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.473992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.474339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.474352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.474700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.474715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 
00:29:38.943 [2024-12-06 13:37:25.475067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.475080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.475279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.475292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.475576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.475590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.943 [2024-12-06 13:37:25.475900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.943 [2024-12-06 13:37:25.475912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.943 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.476221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.476234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.476582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.476596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.476944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.476956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.477315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.477328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.477676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.477690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.477923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.477935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.478278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.478291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.478639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.478653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.478844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.478857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.479206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.479220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.479560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.479573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.479925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.479938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.480290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.480302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.480637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.480650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.480989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.481002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.481200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.481214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.481531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.481544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.481887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.481900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.482220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.482232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.482436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.482448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.482753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.482766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.483090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.483105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.483286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.483300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.483498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.483511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.483715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.483727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.484071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.484085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.484403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.484417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.484739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.484753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.484926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.484940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.485263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.485277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.485625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.485639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.485926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.485938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.486284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.486298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.486506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.486520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.486852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.486865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.487195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.487209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 
00:29:38.944 [2024-12-06 13:37:25.487556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.487569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.944 [2024-12-06 13:37:25.487917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.944 [2024-12-06 13:37:25.487930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.944 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.488275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.488289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.488629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.488642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.489002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.489014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.489338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.489350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.489541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.489555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.489913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.489926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.490109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.490122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.490364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.490379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.490573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.490586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.490933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.490945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.491307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.491320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.491635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.491648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.491980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.491993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.492307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.492320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.492682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.492696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.493011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.493026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.493349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.493362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.493568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.493581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.493881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.493894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.494184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.494196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.494472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.494487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.494808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.494820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.495160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.495174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.495523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.495540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.495889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.495901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.496221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.496234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.496595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.496609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.496960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.496974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.497302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.497316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.497671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.497684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.497886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.497901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.498085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.498098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.498441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.498475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 
00:29:38.945 [2024-12-06 13:37:25.498683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.945 [2024-12-06 13:37:25.498696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.945 qpair failed and we were unable to recover it. 00:29:38.945 [2024-12-06 13:37:25.499001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.499015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.499209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.499225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.499549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.499562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.499913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.499926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 
00:29:38.946 [2024-12-06 13:37:25.500277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.500289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.500617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.500630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.500959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.500973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.501166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.501180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.501512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.501525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 
00:29:38.946 [2024-12-06 13:37:25.501704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.501716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.501907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.501920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.502101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.502115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.502309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.502321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.502662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.502675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 
00:29:38.946 [2024-12-06 13:37:25.503017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.503029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.503210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.503225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.503570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.503583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.503883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.503896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.504218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.504232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 
00:29:38.946 [2024-12-06 13:37:25.504560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.504573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.504770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.504784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.504976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.504988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.505309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.505322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 00:29:38.946 [2024-12-06 13:37:25.505376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.946 [2024-12-06 13:37:25.505386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420 00:29:38.946 qpair failed and we were unable to recover it. 
00:29:38.946 [2024-12-06 13:37:25.505698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.505712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.505854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.505865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.506204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.506217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.506541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.506555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.506883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.506896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.507254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.507269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.507597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.507610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.507965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.507978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.508319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.508332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.508663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.508676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.508994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.509006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.509353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.509366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.509722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.509736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.509939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.946 [2024-12-06 13:37:25.509952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.946 qpair failed and we were unable to recover it.
00:29:38.946 [2024-12-06 13:37:25.510280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.510292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.510492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.510507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.510690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.510703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.510892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.510906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.511247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.511261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.511608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.511621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.511970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.511982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.512303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.512316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.512674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.512687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.513003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.513017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.513364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.513377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.513696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.513710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.514037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.514050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.514345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.514359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.514750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.514764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.515110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.515122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.515464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.515478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.515682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.515697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.515744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47e10 (9): Bad file descriptor
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Read completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 Write completed with error (sct=0, sc=8)
00:29:38.947 starting I/O failed
00:29:38.947 [2024-12-06 13:37:25.516779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:38.947 [2024-12-06 13:37:25.517285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.517345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.517696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.517712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.518077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.518088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.518405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.518415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.518742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.518754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.519080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.519091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.519285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.519297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.519740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.519799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.520002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.947 [2024-12-06 13:37:25.520016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.947 qpair failed and we were unable to recover it.
00:29:38.947 [2024-12-06 13:37:25.520385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.520396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.520719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.520731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.521024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.521035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.521383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.521393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.521751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.521763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.522093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.522103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.522452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.522469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.522772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.522783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.523109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.523119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.523471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.523481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.523789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.523799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.524155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.524167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.524470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.524482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.524816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.524827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.525155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.525165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.525523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.525534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.525886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.525897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.526189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.526199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.526531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.526543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.526742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.526753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.527105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.527116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.527441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.527452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.527777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.527789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.528142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.528153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.528483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.528497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.528674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.528687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.529024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.529033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.529339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.529352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.529529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.529540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.529744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.529753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.529952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.529963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.530300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.530310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.530502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.530515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.530725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.530735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.530956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.530966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.531324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.531335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.531646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.531656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.531979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.948 [2024-12-06 13:37:25.531989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.948 qpair failed and we were unable to recover it.
00:29:38.948 [2024-12-06 13:37:25.532349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.532359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.532677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.532688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.533015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.533027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.533331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.533344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.533741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.533753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.533918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.533928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.534244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.534254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.534461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.534473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.534780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.949 [2024-12-06 13:37:25.534790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.949 qpair failed and we were unable to recover it.
00:29:38.949 [2024-12-06 13:37:25.534961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.534971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.535323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.535333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.535688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.535700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.536004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.536017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.536326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.536340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 
00:29:38.949 [2024-12-06 13:37:25.536682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.536694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.537053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.537065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.537236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.537245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.537578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.537589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.537802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.537813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 
00:29:38.949 [2024-12-06 13:37:25.538141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.538153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.538475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.538485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.538787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.538797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.539103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.539114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.539446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.539460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 
00:29:38.949 [2024-12-06 13:37:25.539776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.539787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.539975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.539987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.540333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.540345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.540630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.540642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.540864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.540876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 
00:29:38.949 [2024-12-06 13:37:25.541202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.541212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.541378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.541387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.541749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.541760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.542078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.542088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.542280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.542291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 
00:29:38.949 [2024-12-06 13:37:25.542581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.542595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.542918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.542933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.543243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.543254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.543547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.543557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 00:29:38.949 [2024-12-06 13:37:25.543867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.949 [2024-12-06 13:37:25.543877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.949 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.544201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.544213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.544529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.544544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.544860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.544872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.545222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.545234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.545545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.545557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.545649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.545657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.545939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.545950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.546266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.546277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.546581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.546592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.546922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.546932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.547233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.547245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.547572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.547584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.547924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.547937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.548253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.548264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.548551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.548563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.548870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.548881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.549287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.549301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.549479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.549491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.549677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.549688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.550036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.550047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.550238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.550250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.550428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.550438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.550748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.550760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.551086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.551097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.551431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.551443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.551760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.551772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.552092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.552102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.552427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.552437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.552810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.552822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.553111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.553124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.950 [2024-12-06 13:37:25.553478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.553491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.553782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.553794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.554087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.554099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.554291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.554302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 00:29:38.950 [2024-12-06 13:37:25.554616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.950 [2024-12-06 13:37:25.554628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.950 qpair failed and we were unable to recover it. 
00:29:38.951 [2024-12-06 13:37:25.554927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.554938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.555261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.555271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.555599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.555612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.555810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.555824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.556099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.556111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 
00:29:38.951 [2024-12-06 13:37:25.556442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.556459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.556634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.556644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.556995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.557008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.557333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.557344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.557682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.557693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 
00:29:38.951 [2024-12-06 13:37:25.557890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.557903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.558219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.558230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.558517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.558528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.558847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.558857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.559205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.559217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 
00:29:38.951 [2024-12-06 13:37:25.559540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.559552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.559901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.559911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.560089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.560100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.560302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.560314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 00:29:38.951 [2024-12-06 13:37:25.560536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.951 [2024-12-06 13:37:25.560548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:38.951 qpair failed and we were unable to recover it. 
00:29:38.951 [2024-12-06 13:37:25.560758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.560768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.561061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.561071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.561265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.561277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.561642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.561653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.561984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.561994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.562304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.562316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.562506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.562517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.562800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.562809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.563154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.563164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.563451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.563475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.563801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.563811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.564134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.564145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.564464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.564475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.564770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.564781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.565066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.565083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.565379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.565390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.565562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.565573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.951 qpair failed and we were unable to recover it.
00:29:38.951 [2024-12-06 13:37:25.565777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.951 [2024-12-06 13:37:25.565788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.566118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.566128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.566418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.566428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.566608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.566619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.566972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.566983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.567164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.567177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.567493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.567506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.567831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.567842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.568043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.568055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.568385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.568395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.568714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.568724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.569038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.569048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:38.952 [2024-12-06 13:37:25.569370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:38.952 [2024-12-06 13:37:25.569384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:38.952 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.569777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.569791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.569971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.569984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.570195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.570205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.570385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.570397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.570704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.570716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.571058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.571070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.571244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.571255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.571563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.571574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.571908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.571921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.572084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.572094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.572412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.572423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.572751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.572766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.572962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.572973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.573296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.573307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.573630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.573642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.573943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.573955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.574274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.574286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.574616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.574627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.574922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.574934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.575132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.575143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.575528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.575539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.575875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.575885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.576086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.576097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.576318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.576331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.576523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.576535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.576708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.576720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.577054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.577065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.577388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.577400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.577696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.577707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.578021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.578031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.578374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.578388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.578710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.578722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.579048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.579060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.579253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.579264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.230 [2024-12-06 13:37:25.579578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.230 [2024-12-06 13:37:25.579589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.230 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.579934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.579946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.580282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.580293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.580611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.580625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.580957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.580968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.581268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.581280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.581505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.581517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.581866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.581876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.582198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.582209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.582508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.582518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.582940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.582950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.583276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.583288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.583573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.583583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.583897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.583908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.584198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.584209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.584547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.584558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.584954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.584964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.585166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.585177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.585346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.585359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.585666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.585676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.585970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.585980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.586189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.586199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.586418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.586428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.586759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.586769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.587098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.587108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.587435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.587447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.587804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.587815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.588136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.588146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.588470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.588481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.588824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.588835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.589171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.589182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.589489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.589506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.589827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.589837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.590131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.590141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.590338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.590350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.590550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.590561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.590963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.591083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a3c000b90 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.591467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.591531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a38000b90 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.591801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.231 [2024-12-06 13:37:25.591815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.231 qpair failed and we were unable to recover it.
00:29:39.231 [2024-12-06 13:37:25.592221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.592233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.592403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.592414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.592706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.592717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.593048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.593059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.593286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.593296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.593619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.593629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.593921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.593936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.594283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.594294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.594609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.594623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.594932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.594943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.595294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.595305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.595639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.232 [2024-12-06 13:37:25.595650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.232 qpair failed and we were unable to recover it.
00:29:39.232 [2024-12-06 13:37:25.595832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.595842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.596138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.596149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.596475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.596488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.596673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.596682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.597014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.597026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 
00:29:39.232 [2024-12-06 13:37:25.597220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.597232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.597572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.597582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.597775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.597784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.598017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.598028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.598309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.598319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 
00:29:39.232 [2024-12-06 13:37:25.598673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.598685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.598848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.598857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.599175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.599186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.599546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.599556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.599907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.599917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 
00:29:39.232 [2024-12-06 13:37:25.600270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.600282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.600629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.600640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.600958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.600969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.601297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.601309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.601586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.601597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 
00:29:39.232 [2024-12-06 13:37:25.601796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.601807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.602146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.602159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.602437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.602448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.602663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.602674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.232 [2024-12-06 13:37:25.603018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.603028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 
00:29:39.232 [2024-12-06 13:37:25.603341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.232 [2024-12-06 13:37:25.603353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.232 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.603708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.603720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.604047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.604058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.604272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.604281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.604577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.604588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.604938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.604949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.605302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.605313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.605626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.605639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.605992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.606003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.606323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.606332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.606664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.606675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.606994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.607004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.607324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.607334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.607668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.607680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.607939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.607950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.608274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.608285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.608486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.608497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.608810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.608820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.608989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.609000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.609234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.609244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.609561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.609572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.609910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.609920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.610226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.610238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.610525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.610539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.610733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.610741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.611099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.611110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.611447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.611463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.611808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.611818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.612011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.612022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.612191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.612202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.612522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.612535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.612740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.612749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.613087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.613097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.613279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.613290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.613629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.613640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 
00:29:39.233 [2024-12-06 13:37:25.613936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.613946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.233 [2024-12-06 13:37:25.614273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.233 [2024-12-06 13:37:25.614283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.233 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.614629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.614643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.614972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.614982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.615151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.615160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 
00:29:39.234 [2024-12-06 13:37:25.615504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.615516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.615825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.615837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.616123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.616135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.616475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.616486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.616797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.616808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 
00:29:39.234 [2024-12-06 13:37:25.617094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.617104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.617409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.617419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.617732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.617743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.617959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.617971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.618317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.618330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 
00:29:39.234 [2024-12-06 13:37:25.618631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.618641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.618835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.618847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.619165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.619175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.619502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.619514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.619852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.619862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 
00:29:39.234 [2024-12-06 13:37:25.620191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.620201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.620487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.620500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.620792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.620802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.621195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.621205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.621526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.621538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 
00:29:39.234 [2024-12-06 13:37:25.621874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.621885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.622213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.622224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.622582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.622594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.622914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.622925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.623255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.623265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 
00:29:39.234 [2024-12-06 13:37:25.623585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.623597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.234 [2024-12-06 13:37:25.623925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.234 [2024-12-06 13:37:25.623935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.234 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.624262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.624272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.624596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.624606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.624906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.624916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.625245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.625255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.625452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.625470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.625791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.625803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.626130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.626142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.626426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.626436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.626724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.626735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.627010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.627020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.627198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.627207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.627414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.627425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.627739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.627749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.628081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.628092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.628401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.628417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.628745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.628755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.629051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.629062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.629382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.629393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.629681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.629691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.629983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.629993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.630316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.630328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.630667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.630678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.630873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.630885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.631065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.631076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.631273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.631288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.631557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.631568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.631898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.631908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.632236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.632246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.632540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.632551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.632738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.632748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.633091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.633101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.633417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.633428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.633734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.633746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.235 [2024-12-06 13:37:25.633896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.633906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.634095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.634105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.634308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.634321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.634629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.634642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 00:29:39.235 [2024-12-06 13:37:25.634831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.235 [2024-12-06 13:37:25.634843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.235 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.635200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.635311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.635850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.635960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.636248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.636286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.636620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.636657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.637009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.637042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.637267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.637300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0a44000b90 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.637658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.637718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.637961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.637976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.638321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.638333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.638785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.638844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.639074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.639088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.639396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.639408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.639715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.639726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.640019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.640036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.640350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.640362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.640674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.640685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.640907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.640920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.641258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.641269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.641576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.641591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.641894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.641906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.642090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.642104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.642461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.642473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.642678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.642690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.643045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.643056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.643281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.643292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.643628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.643640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.643959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.643971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.644307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.644318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.644688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.644700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.644893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.644904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.645139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.645151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.645448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.645467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.645800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.645812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.645986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.645999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.646315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.646328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 
00:29:39.236 [2024-12-06 13:37:25.646630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.646642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.646951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.646962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.647247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.236 [2024-12-06 13:37:25.647259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.236 qpair failed and we were unable to recover it. 00:29:39.236 [2024-12-06 13:37:25.647627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.237 [2024-12-06 13:37:25.647638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.237 qpair failed and we were unable to recover it. 00:29:39.237 [2024-12-06 13:37:25.647841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.237 [2024-12-06 13:37:25.647854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.237 qpair failed and we were unable to recover it. 
00:29:39.237 [2024-12-06 13:37:25.648203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.237 [2024-12-06 13:37:25.648219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.237 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats over a hundred more times with advancing timestamps between 13:37:25.648 and 13:37:25.683; only the final attempt is shown below ...]
00:29:39.240 [2024-12-06 13:37:25.683006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.683017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.683331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.683342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.683671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.683683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.683975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.683988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.684307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.684320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.684528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.684539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.684891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.684903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.685186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.685197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.685486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.685501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.685693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.685701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.685941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.685952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.686273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.686286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.686585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.686598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.686801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.686813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.687136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.687148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.687491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.687502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.687664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.687673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.688010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.688020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.688377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.688391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.688712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.688723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.689053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.689064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.689367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.689376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.689774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.689785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.689952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.689963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.690251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.690261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.690579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.690589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.690895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.690907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.691074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.691084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.691401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.691411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.691617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.691629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.691948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.691958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 
00:29:39.240 [2024-12-06 13:37:25.692276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.240 [2024-12-06 13:37:25.692287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.240 qpair failed and we were unable to recover it. 00:29:39.240 [2024-12-06 13:37:25.692477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.692489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.692844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.692854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.693172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.693184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.693463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.693473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.693838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.693848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.694135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.694144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.694477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.694489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.694841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.694851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.695032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.695044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.695378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.695388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.695748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.695760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.696097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.696107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.696439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.696450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.696802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.696812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.697132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.697142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.697441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.697451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.697796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.697806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.698097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.698110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.698423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.698434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.698795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.698807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.698860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.698867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.699052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.699063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.699404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.699414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.699761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.699772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.700097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.700108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.700416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.700426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.700602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.700613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.701031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.701041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.701118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.701125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.701439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.701449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.701745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.701756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.702075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.702085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.702412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.702424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.702735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.702747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 
00:29:39.241 [2024-12-06 13:37:25.703038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.703049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.703372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.703382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.703684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.703694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.241 [2024-12-06 13:37:25.703979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.241 [2024-12-06 13:37:25.703989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.241 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.704159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.704170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 
00:29:39.242 [2024-12-06 13:37:25.704522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.704532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.704698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.704707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.704991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.705001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.705204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.705214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.705395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.705405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 
00:29:39.242 [2024-12-06 13:37:25.705753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.705765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.706082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.706092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.706318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.706328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.706660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.706671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 00:29:39.242 [2024-12-06 13:37:25.707017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.242 [2024-12-06 13:37:25.707028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.242 qpair failed and we were unable to recover it. 
00:29:39.244 [2024-12-06 13:37:25.727089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.727100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 [2024-12-06 13:37:25.727412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.727423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:39.244 [2024-12-06 13:37:25.727738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.727752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:39.244 [2024-12-06 13:37:25.727962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.727977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 [2024-12-06 13:37:25.728271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.728284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:39.244 [2024-12-06 13:37:25.728592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.728611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:39.244 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.244 [2024-12-06 13:37:25.728931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.728943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 [2024-12-06 13:37:25.729133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.729144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.244 [2024-12-06 13:37:25.729472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.244 [2024-12-06 13:37:25.729483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.244 qpair failed and we were unable to recover it.
00:29:39.245 [2024-12-06 13:37:25.738818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.738829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.739126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.739138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.739453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.739476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.739763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.739774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.740098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.740108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 
00:29:39.245 [2024-12-06 13:37:25.740395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.740406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.740602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.740618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.740932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.740942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.741230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.741240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.741564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.741575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 
00:29:39.245 [2024-12-06 13:37:25.741629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.741636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.741988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.741998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.742310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.742321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.742616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.742629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.742985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.742999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 
00:29:39.245 [2024-12-06 13:37:25.743306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.743319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.743674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.743686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.743964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.743975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.744284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.744294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.744509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.744522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 
00:29:39.245 [2024-12-06 13:37:25.744805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.744817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.745153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.745164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.245 [2024-12-06 13:37:25.745461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.245 [2024-12-06 13:37:25.745474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.245 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.745785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.745795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.746120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.746133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.746459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.746470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.746685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.746696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.747056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.747065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.747385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.747396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.747627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.747638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.748010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.748022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.748372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.748382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.748631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.748641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.748978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.748992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.749313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.749322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.749638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.749649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.749994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.750004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.750374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.750386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.750676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.750688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.750970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.750984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.751338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.751349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.751674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.751686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.751998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.752009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.752327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.752340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.752684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.752694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.753016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.753026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.753317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.753328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.753608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.753622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.753942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.753953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.754134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.754144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.754464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.754477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.754675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.754686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.755011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.755021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.755332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.755343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.755687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.755699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.755890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.755899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.756097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.756107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.756450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.756469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.756746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.756757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 00:29:39.246 [2024-12-06 13:37:25.757076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.757086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.246 qpair failed and we were unable to recover it. 
00:29:39.246 [2024-12-06 13:37:25.757450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.246 [2024-12-06 13:37:25.757469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.757781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.757793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.758091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.758106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.758306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.758317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.758506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.758520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.758849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.758859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.759173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.759183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.759482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.759494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.759819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.759829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.760134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.760144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.760428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.760437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.760796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.760809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.761086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.761098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.761314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.761326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.761498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.761508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.761846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.761858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.762169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.762178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.762501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.762512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.762834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.762846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.763051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.763061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.763387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.763398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.763731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.763744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.764098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.764108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.764415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.764426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.764756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.764766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.765098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.765108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.765313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.765324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.765637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.765647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.765945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.765956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.766257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.766271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.766472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.766484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.766808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.766819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.767172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.767182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.767525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.767536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.767869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.767880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 
00:29:39.247 [2024-12-06 13:37:25.768088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.768100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.247 [2024-12-06 13:37:25.768380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.247 [2024-12-06 13:37:25.768391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.247 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.768717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.768729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.768907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.768918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.769259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.769269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 
00:29:39.248 [2024-12-06 13:37:25.769561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.769571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.769902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.769915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.770220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.770232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.770523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.770536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.770736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.770745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 
00:29:39.248 [2024-12-06 13:37:25.771082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.771092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 [2024-12-06 13:37:25.771390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.771399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 [2024-12-06 13:37:25.771703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.771713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:39.248 [2024-12-06 13:37:25.772025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.772041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 [2024-12-06 13:37:25.772243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.772255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:39.248 [2024-12-06 13:37:25.772476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.772489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.248 [2024-12-06 13:37:25.772810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.772823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.248 [2024-12-06 13:37:25.773117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.773130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 [2024-12-06 13:37:25.773479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.248 [2024-12-06 13:37:25.773489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.248 qpair failed and we were unable to recover it.
00:29:39.248 [2024-12-06 13:37:25.773827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.773839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.774148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.774159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.774498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.774508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.774708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.774717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.775043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.775052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 
00:29:39.248 [2024-12-06 13:37:25.775341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.775353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.775538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.248 [2024-12-06 13:37:25.775549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.248 qpair failed and we were unable to recover it. 00:29:39.248 [2024-12-06 13:37:25.775923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.775934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.776217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.776227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.776549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.776559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 
00:29:39.249 [2024-12-06 13:37:25.776886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.776896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.777215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.777224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.777526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.777539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.777839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.777849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.778143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.778154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 
00:29:39.249 [2024-12-06 13:37:25.778448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.778464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.778751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.778761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.778943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.778954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.779004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.779016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.779190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.779200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 
00:29:39.249 [2024-12-06 13:37:25.779580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.779592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.779649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.779656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.779941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.779951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.780288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.780299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.780588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.780599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 
00:29:39.249 [2024-12-06 13:37:25.780888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.780898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.781238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.781248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.781572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.781583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.781888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.781899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.782225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.782235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 
00:29:39.249 [2024-12-06 13:37:25.782562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.782575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.782912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.249 [2024-12-06 13:37:25.782924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.249 qpair failed and we were unable to recover it. 00:29:39.249 [2024-12-06 13:37:25.783114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.783124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.783471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.783481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.783692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.783703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 
00:29:39.250 [2024-12-06 13:37:25.784042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.784051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.784254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.784263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.784438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.784447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.784762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.784774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.785064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.785076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 
00:29:39.250 [2024-12-06 13:37:25.785370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.785380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.785707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.785717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.785981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.785991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.786277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.786286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.786479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.786488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 
00:29:39.250 [2024-12-06 13:37:25.786840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.786851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.787038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.787049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.787384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.787395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.787783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.787794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.788082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.788092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 
00:29:39.250 [2024-12-06 13:37:25.788415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.788425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.788596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.788605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.788942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.788952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.789283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.789293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 00:29:39.250 [2024-12-06 13:37:25.789590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.250 [2024-12-06 13:37:25.789600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.250 qpair failed and we were unable to recover it. 
00:29:39.250 [2024-12-06 13:37:25.789895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.250 [2024-12-06 13:37:25.789905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.250 qpair failed and we were unable to recover it.
[the three-line connect()/qpair failure above repeats verbatim with advancing timestamps, 13:37:25.789895 through 13:37:25.810527; repeats elided]
[repeated "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages elided; the shell trace below was interleaved with them]
00:29:39.252 Malloc0
00:29:39.252 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.252 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:39.252 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.252 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.253 [2024-12-06 13:37:25.819370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[connect()/qpair failures continue through 13:37:25.823576; repeats elided]
00:29:39.253 [2024-12-06 13:37:25.823885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.253 [2024-12-06 13:37:25.823895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-12-06 13:37:25.824243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.253 [2024-12-06 13:37:25.824252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.253 [2024-12-06 13:37:25.824586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.253 [2024-12-06 13:37:25.824596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.253 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.824917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.824927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.825250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.825261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.254 [2024-12-06 13:37:25.825435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.825444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.825772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.825783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.825978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.825989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.826157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.826169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.826506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.826515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.254 [2024-12-06 13:37:25.826869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.826879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.827194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.827204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.827548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.827557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.827722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.827731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.828047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.828059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.254 [2024-12-06 13:37:25.828262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.828273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.828503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.254 [2024-12-06 13:37:25.828514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.828835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.828846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:39.254 [2024-12-06 13:37:25.829150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.829162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.254 [2024-12-06 13:37:25.829382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.829393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.829506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.254 [2024-12-06 13:37:25.829513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.829693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.829704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.830072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.830083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.830403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.254 [2024-12-06 13:37:25.830413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.254 qpair failed and we were unable to recover it.
00:29:39.254 [2024-12-06 13:37:25.830759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.830770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.831075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.831086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.831373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.831384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.831702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.831712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.832032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.832043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.254 [2024-12-06 13:37:25.832398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.832409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.832584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.832597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.832931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.832941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.833166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.833176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 00:29:39.254 [2024-12-06 13:37:25.833514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.254 [2024-12-06 13:37:25.833524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.254 qpair failed and we were unable to recover it. 
00:29:39.254 [2024-12-06 13:37:25.833847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.833857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.834024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.834033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.834374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.834385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.834717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.834727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.835025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.835035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 
00:29:39.255 [2024-12-06 13:37:25.835213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.835222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.835529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.835538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.835807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.835817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.836162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.836171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.836464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.836474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 
00:29:39.255 [2024-12-06 13:37:25.836819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.836828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.837135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.837147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.837466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.837477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.837675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.837686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.838045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.838056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 
00:29:39.255 [2024-12-06 13:37:25.838375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.838386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.838744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.838756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.839058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.839069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.839381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.839392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.839443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.839461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 
00:29:39.255 [2024-12-06 13:37:25.839743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.839754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 [2024-12-06 13:37:25.840164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.840175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 [2024-12-06 13:37:25.840518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.840530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.255 [2024-12-06 13:37:25.840778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.840790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:39.255 [2024-12-06 13:37:25.841094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.841106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.255 [2024-12-06 13:37:25.841435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.841448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.255 [2024-12-06 13:37:25.841764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.841777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 [2024-12-06 13:37:25.842128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.255 [2024-12-06 13:37:25.842139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.255 qpair failed and we were unable to recover it.
00:29:39.255 [2024-12-06 13:37:25.842424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.842436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.842639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.842651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.842967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.842978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.843183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.843194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.843363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.843376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 
00:29:39.255 [2024-12-06 13:37:25.843577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.843589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.843776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.843787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.844120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.844132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.255 [2024-12-06 13:37:25.844483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.255 [2024-12-06 13:37:25.844495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.255 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.844727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.844738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 
00:29:39.256 [2024-12-06 13:37:25.844959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.844970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.845308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.845319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.845518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.845530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.845825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.845837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.846128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.846141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 
00:29:39.256 [2024-12-06 13:37:25.846477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.846488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.846819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.846831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.847146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.847157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.847484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.847497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 00:29:39.256 [2024-12-06 13:37:25.847831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:39.256 [2024-12-06 13:37:25.847843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420 00:29:39.256 qpair failed and we were unable to recover it. 
00:29:39.256 [2024-12-06 13:37:25.848133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.848142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.848430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.848441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.848799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.848810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.849158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.849168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.849348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.849358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.849681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.849690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.849976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.849986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.850326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.850337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.850597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.850607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.850937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.850947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.851268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.851278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.851578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.851588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.851939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.851950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.852285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.852296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.256 [2024-12-06 13:37:25.852512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.852524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.852839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.852850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:39.256 [2024-12-06 13:37:25.853181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.853192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.256 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.256 [2024-12-06 13:37:25.853523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.853535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.853865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.853875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.854074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.854084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.854401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.854411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.854723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.854733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.854786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.854793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.256 [2024-12-06 13:37:25.855121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.256 [2024-12-06 13:37:25.855132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.256 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.855432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.855442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.855626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.855638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.855988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.855999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.856288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.856298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.856626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.856636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.856976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.856986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.857316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.857328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.857639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.857649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.857986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.857997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.858290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.858302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.858599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.858610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.858913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.858925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.859251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.859260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.859500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:39.257 [2024-12-06 13:37:25.859512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b520c0 with addr=10.0.0.2, port=4420
00:29:39.257 qpair failed and we were unable to recover it.
00:29:39.257 [2024-12-06 13:37:25.859779] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:39.257 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.257 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:39.257 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.257 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.520 [2024-12-06 13:37:25.870708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.870810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.870835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.870844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.870850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.870872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.520 13:37:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2339507
00:29:39.520 [2024-12-06 13:37:25.880415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.880526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.880545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.880551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.880556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.880573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.890535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.890615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.890631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.890637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.890643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.890658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.900522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.900600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.900617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.900635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.900641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.900656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.910562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.910633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.910649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.910656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.910662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.910676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.920464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.920519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.920534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.920540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.920545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.920560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.930503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.930577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.930592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.930598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.930603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.930617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.940566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.940632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.940646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.940653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.940658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.940677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.950623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.950688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.950703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.950709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.950714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.950727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.960635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.960693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.960707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.960714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.960719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.960733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.970724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.970810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.970824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.970830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.970836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.970852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.980672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.980736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.520 [2024-12-06 13:37:25.980750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.520 [2024-12-06 13:37:25.980756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.520 [2024-12-06 13:37:25.980761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.520 [2024-12-06 13:37:25.980775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.520 qpair failed and we were unable to recover it.
00:29:39.520 [2024-12-06 13:37:25.990720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.520 [2024-12-06 13:37:25.990786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.521 [2024-12-06 13:37:25.990801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.521 [2024-12-06 13:37:25.990808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.521 [2024-12-06 13:37:25.990813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.521 [2024-12-06 13:37:25.990827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.521 qpair failed and we were unable to recover it.
00:29:39.521 [2024-12-06 13:37:26.000726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.521 [2024-12-06 13:37:26.000785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.521 [2024-12-06 13:37:26.000799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.521 [2024-12-06 13:37:26.000806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.521 [2024-12-06 13:37:26.000811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.521 [2024-12-06 13:37:26.000825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.521 qpair failed and we were unable to recover it.
00:29:39.521 [2024-12-06 13:37:26.010767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.521 [2024-12-06 13:37:26.010837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.521 [2024-12-06 13:37:26.010850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.521 [2024-12-06 13:37:26.010857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.521 [2024-12-06 13:37:26.010862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.521 [2024-12-06 13:37:26.010875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.521 qpair failed and we were unable to recover it.
00:29:39.521 [2024-12-06 13:37:26.020809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.521 [2024-12-06 13:37:26.020919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.521 [2024-12-06 13:37:26.020934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.521 [2024-12-06 13:37:26.020941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.521 [2024-12-06 13:37:26.020947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.521 [2024-12-06 13:37:26.020961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.521 qpair failed and we were unable to recover it.
00:29:39.521 [2024-12-06 13:37:26.030834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.521 [2024-12-06 13:37:26.030897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.521 [2024-12-06 13:37:26.030917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.521 [2024-12-06 13:37:26.030924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.521 [2024-12-06 13:37:26.030929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:39.521 [2024-12-06 13:37:26.030942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.521 qpair failed and we were unable to recover it.
00:29:39.521 [2024-12-06 13:37:26.040826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.040887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.040902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.040908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.040914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.040927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.050889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.050948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.050963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.050970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.050975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.050989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.060875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.060939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.060953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.060960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.060965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.060979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.070987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.071070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.071085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.071091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.071097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.071116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.081033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.081096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.081112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.081118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.081122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.081136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.091094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.091186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.091201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.091208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.091213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.091227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.101071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.101138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.101153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.101159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.101164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.101178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.111087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.111160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.111176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.521 [2024-12-06 13:37:26.111183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.521 [2024-12-06 13:37:26.111189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.521 [2024-12-06 13:37:26.111202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.521 qpair failed and we were unable to recover it. 
00:29:39.521 [2024-12-06 13:37:26.121054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.521 [2024-12-06 13:37:26.121113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.521 [2024-12-06 13:37:26.121130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.522 [2024-12-06 13:37:26.121136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.522 [2024-12-06 13:37:26.121141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.522 [2024-12-06 13:37:26.121155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.522 qpair failed and we were unable to recover it. 
00:29:39.522 [2024-12-06 13:37:26.131130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.522 [2024-12-06 13:37:26.131192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.522 [2024-12-06 13:37:26.131207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.522 [2024-12-06 13:37:26.131214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.522 [2024-12-06 13:37:26.131219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.522 [2024-12-06 13:37:26.131233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.522 qpair failed and we were unable to recover it. 
00:29:39.522 [2024-12-06 13:37:26.141159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.522 [2024-12-06 13:37:26.141224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.522 [2024-12-06 13:37:26.141240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.522 [2024-12-06 13:37:26.141246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.522 [2024-12-06 13:37:26.141252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.522 [2024-12-06 13:37:26.141266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.522 qpair failed and we were unable to recover it. 
00:29:39.522 [2024-12-06 13:37:26.151175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.522 [2024-12-06 13:37:26.151248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.522 [2024-12-06 13:37:26.151264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.522 [2024-12-06 13:37:26.151270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.522 [2024-12-06 13:37:26.151275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.522 [2024-12-06 13:37:26.151288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.522 qpair failed and we were unable to recover it. 
00:29:39.522 [2024-12-06 13:37:26.161207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.522 [2024-12-06 13:37:26.161269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.522 [2024-12-06 13:37:26.161289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.522 [2024-12-06 13:37:26.161295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.522 [2024-12-06 13:37:26.161301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.522 [2024-12-06 13:37:26.161314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.522 qpair failed and we were unable to recover it. 
00:29:39.522 [2024-12-06 13:37:26.171232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.522 [2024-12-06 13:37:26.171292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.522 [2024-12-06 13:37:26.171308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.522 [2024-12-06 13:37:26.171314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.522 [2024-12-06 13:37:26.171319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.522 [2024-12-06 13:37:26.171333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.522 qpair failed and we were unable to recover it. 
00:29:39.784 [2024-12-06 13:37:26.181333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.784 [2024-12-06 13:37:26.181408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.784 [2024-12-06 13:37:26.181423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.784 [2024-12-06 13:37:26.181429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.784 [2024-12-06 13:37:26.181435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.784 [2024-12-06 13:37:26.181449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.784 qpair failed and we were unable to recover it. 
00:29:39.784 [2024-12-06 13:37:26.191334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.784 [2024-12-06 13:37:26.191409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.784 [2024-12-06 13:37:26.191431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.784 [2024-12-06 13:37:26.191438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.784 [2024-12-06 13:37:26.191444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.784 [2024-12-06 13:37:26.191469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.201320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.201373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.201388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.201395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.201400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.201419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.211334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.211387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.211403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.211410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.211415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.211430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.221348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.221410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.221426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.221432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.221437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.221451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.231311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.231376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.231390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.231397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.231403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.231417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.241461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.241519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.241534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.241541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.241546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.241560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.251491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.251551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.251566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.251573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.251579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.251592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.261519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.261581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.261595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.261601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.261607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.261620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.271562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.271630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.271644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.271650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.271656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.271669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.281548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.281600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.281615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.281621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.281626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.281639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.291573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.291630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.291649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.785 [2024-12-06 13:37:26.291656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.785 [2024-12-06 13:37:26.291660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.785 [2024-12-06 13:37:26.291674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.785 qpair failed and we were unable to recover it. 
00:29:39.785 [2024-12-06 13:37:26.301637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.785 [2024-12-06 13:37:26.301702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.785 [2024-12-06 13:37:26.301718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.301724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.301730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.301744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.311622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.311689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.311703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.311709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.311715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.311728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.321674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.321737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.321752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.321759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.321764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.321777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.331717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.331793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.331808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.331814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.331819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.331839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.341720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.341785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.341799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.341806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.341811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.341824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.351812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.351882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.351897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.351904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.351909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.351922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.361798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.361857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.361872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.361879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.361885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.361899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.371838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.371905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.371918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.371925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.371930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.371943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.381897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.381968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.381984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.381990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.381995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.382009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.391978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.392092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.392108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.392114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.392120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.392134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.401818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.786 [2024-12-06 13:37:26.401879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.786 [2024-12-06 13:37:26.401895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.786 [2024-12-06 13:37:26.401901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.786 [2024-12-06 13:37:26.401906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.786 [2024-12-06 13:37:26.401921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.786 qpair failed and we were unable to recover it. 
00:29:39.786 [2024-12-06 13:37:26.411925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.787 [2024-12-06 13:37:26.411990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.787 [2024-12-06 13:37:26.412006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.787 [2024-12-06 13:37:26.412012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.787 [2024-12-06 13:37:26.412017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.787 [2024-12-06 13:37:26.412032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.787 qpair failed and we were unable to recover it. 
00:29:39.787 [2024-12-06 13:37:26.421867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.787 [2024-12-06 13:37:26.421954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.787 [2024-12-06 13:37:26.421975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.787 [2024-12-06 13:37:26.421982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.787 [2024-12-06 13:37:26.421988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.787 [2024-12-06 13:37:26.422001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.787 qpair failed and we were unable to recover it. 
00:29:39.787 [2024-12-06 13:37:26.432036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.787 [2024-12-06 13:37:26.432113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.787 [2024-12-06 13:37:26.432128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.787 [2024-12-06 13:37:26.432134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.787 [2024-12-06 13:37:26.432140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:39.787 [2024-12-06 13:37:26.432153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:39.787 qpair failed and we were unable to recover it. 
00:29:40.049 [2024-12-06 13:37:26.442043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.049 [2024-12-06 13:37:26.442109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.049 [2024-12-06 13:37:26.442124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.049 [2024-12-06 13:37:26.442130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.049 [2024-12-06 13:37:26.442135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.049 [2024-12-06 13:37:26.442149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.049 qpair failed and we were unable to recover it. 
00:29:40.049 [2024-12-06 13:37:26.452076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.049 [2024-12-06 13:37:26.452137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.452151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.452158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.452163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.452177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.462156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.462224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.462238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.462244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.462250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.462269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.472137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.472207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.472222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.472229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.472234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.472247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.482143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.482204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.482218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.482224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.482229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.482243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.492156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.492220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.492234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.492241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.492246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.492259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.502239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.502313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.502326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.502333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.502338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.502352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.512320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.512380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.512395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.512402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.512407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.512420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.522321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.522387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.522403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.522409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.522414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.522429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.532291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.532344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.532358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.532365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.532370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.532383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.542346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.542408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.542422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.542429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.542434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.542448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.552302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.050 [2024-12-06 13:37:26.552373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.050 [2024-12-06 13:37:26.552397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.050 [2024-12-06 13:37:26.552404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.050 [2024-12-06 13:37:26.552409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.050 [2024-12-06 13:37:26.552422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.050 qpair failed and we were unable to recover it. 
00:29:40.050 [2024-12-06 13:37:26.562407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.051 [2024-12-06 13:37:26.562465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.051 [2024-12-06 13:37:26.562480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.051 [2024-12-06 13:37:26.562487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.051 [2024-12-06 13:37:26.562492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.051 [2024-12-06 13:37:26.562505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.051 qpair failed and we were unable to recover it. 
00:29:40.051 [2024-12-06 13:37:26.572421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.051 [2024-12-06 13:37:26.572488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.051 [2024-12-06 13:37:26.572502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.051 [2024-12-06 13:37:26.572508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.051 [2024-12-06 13:37:26.572513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.051 [2024-12-06 13:37:26.572527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.051 qpair failed and we were unable to recover it. 
00:29:40.051 [2024-12-06 13:37:26.582465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.051 [2024-12-06 13:37:26.582565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.051 [2024-12-06 13:37:26.582579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.051 [2024-12-06 13:37:26.582587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.051 [2024-12-06 13:37:26.582593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.051 [2024-12-06 13:37:26.582607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.051 qpair failed and we were unable to recover it. 
00:29:40.051 [2024-12-06 13:37:26.592523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.051 [2024-12-06 13:37:26.592592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.051 [2024-12-06 13:37:26.592606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.051 [2024-12-06 13:37:26.592613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.051 [2024-12-06 13:37:26.592623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.052 [2024-12-06 13:37:26.592636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-12-06 13:37:26.602494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.052 [2024-12-06 13:37:26.602557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.052 [2024-12-06 13:37:26.602572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.052 [2024-12-06 13:37:26.602579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.052 [2024-12-06 13:37:26.602585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.052 [2024-12-06 13:37:26.602599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-12-06 13:37:26.612444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.052 [2024-12-06 13:37:26.612508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.052 [2024-12-06 13:37:26.612523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.052 [2024-12-06 13:37:26.612529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.052 [2024-12-06 13:37:26.612534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.052 [2024-12-06 13:37:26.612548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-12-06 13:37:26.622505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.052 [2024-12-06 13:37:26.622573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.052 [2024-12-06 13:37:26.622589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.052 [2024-12-06 13:37:26.622596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.052 [2024-12-06 13:37:26.622601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.052 [2024-12-06 13:37:26.622616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.052 qpair failed and we were unable to recover it. 
00:29:40.052 [2024-12-06 13:37:26.632652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.052 [2024-12-06 13:37:26.632727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.052 [2024-12-06 13:37:26.632742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.053 [2024-12-06 13:37:26.632749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.053 [2024-12-06 13:37:26.632754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.053 [2024-12-06 13:37:26.632769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.053 qpair failed and we were unable to recover it. 
00:29:40.053 [2024-12-06 13:37:26.642617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.053 [2024-12-06 13:37:26.642676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.053 [2024-12-06 13:37:26.642691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.053 [2024-12-06 13:37:26.642697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.053 [2024-12-06 13:37:26.642703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.053 [2024-12-06 13:37:26.642716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.053 qpair failed and we were unable to recover it.
00:29:40.053 [2024-12-06 13:37:26.652713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.053 [2024-12-06 13:37:26.652778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.053 [2024-12-06 13:37:26.652792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.053 [2024-12-06 13:37:26.652799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.053 [2024-12-06 13:37:26.652805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.053 [2024-12-06 13:37:26.652819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.053 qpair failed and we were unable to recover it.
00:29:40.053 [2024-12-06 13:37:26.662777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.053 [2024-12-06 13:37:26.662846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.053 [2024-12-06 13:37:26.662863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.053 [2024-12-06 13:37:26.662869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.053 [2024-12-06 13:37:26.662875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.053 [2024-12-06 13:37:26.662888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.053 qpair failed and we were unable to recover it.
00:29:40.053 [2024-12-06 13:37:26.672731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.053 [2024-12-06 13:37:26.672799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.053 [2024-12-06 13:37:26.672814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.053 [2024-12-06 13:37:26.672820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.053 [2024-12-06 13:37:26.672825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.053 [2024-12-06 13:37:26.672839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.053 qpair failed and we were unable to recover it.
00:29:40.053 [2024-12-06 13:37:26.682780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.053 [2024-12-06 13:37:26.682872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.053 [2024-12-06 13:37:26.682891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.053 [2024-12-06 13:37:26.682898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.053 [2024-12-06 13:37:26.682904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.053 [2024-12-06 13:37:26.682917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.053 qpair failed and we were unable to recover it.
00:29:40.053 [2024-12-06 13:37:26.692722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.053 [2024-12-06 13:37:26.692785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.053 [2024-12-06 13:37:26.692799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.054 [2024-12-06 13:37:26.692806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.054 [2024-12-06 13:37:26.692811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.054 [2024-12-06 13:37:26.692825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.054 qpair failed and we were unable to recover it.
00:29:40.054 [2024-12-06 13:37:26.702833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.054 [2024-12-06 13:37:26.702896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.054 [2024-12-06 13:37:26.702910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.054 [2024-12-06 13:37:26.702916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.054 [2024-12-06 13:37:26.702922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.054 [2024-12-06 13:37:26.702935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.054 qpair failed and we were unable to recover it.
00:29:40.316 [2024-12-06 13:37:26.712910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.316 [2024-12-06 13:37:26.712984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.712998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.713004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.713009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.713023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.722900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.722970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.722985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.722991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.723001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.723014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.732925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.732987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.733003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.733009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.733015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.733028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.742968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.743031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.743046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.743052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.743058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.743072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.753030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.753113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.753127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.753134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.753139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.753153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.763032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.763090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.763105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.763111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.763117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.763130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.773010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.773066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.773082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.773088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.773094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.773107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.782997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.783064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.783081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.783089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.783094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.783111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.793156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.793253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.793269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.793277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.793282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.793297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.803143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.803203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.803237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.803245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.803250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.803272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.813162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.813268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.813308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.813317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.813323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.813343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.823234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.823300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.823317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.823324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.823329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.317 [2024-12-06 13:37:26.823344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.317 qpair failed and we were unable to recover it.
00:29:40.317 [2024-12-06 13:37:26.833255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.317 [2024-12-06 13:37:26.833319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.317 [2024-12-06 13:37:26.833335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.317 [2024-12-06 13:37:26.833341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.317 [2024-12-06 13:37:26.833347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.833363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.843285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.843353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.843369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.843375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.843381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.843394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.853299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.853397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.853413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.853419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.853429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.853443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.863342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.863403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.863418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.863424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.863428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.863442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.873412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.873474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.873489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.873496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.873501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.873515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.883386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.883464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.883479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.883485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.883490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.883503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.893423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.893491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.893506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.893512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.893518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.893532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.903466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.903538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.903552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.903558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.903563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.903577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.913518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.913583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.913598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.913605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.913609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.913623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.923532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.923589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.923604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.923610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.923616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.923629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.933529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.933644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.933659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.933666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.933671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.933686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.943587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.943651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.943671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.943678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.943683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.943697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.953630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.953705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.953719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.953725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.953731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.318 [2024-12-06 13:37:26.953745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.318 qpair failed and we were unable to recover it.
00:29:40.318 [2024-12-06 13:37:26.963643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.318 [2024-12-06 13:37:26.963701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.318 [2024-12-06 13:37:26.963715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.318 [2024-12-06 13:37:26.963721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.318 [2024-12-06 13:37:26.963727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.319 [2024-12-06 13:37:26.963740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.319 qpair failed and we were unable to recover it.
00:29:40.581 [2024-12-06 13:37:26.973683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.581 [2024-12-06 13:37:26.973742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.581 [2024-12-06 13:37:26.973756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.581 [2024-12-06 13:37:26.973763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.581 [2024-12-06 13:37:26.973769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.581 [2024-12-06 13:37:26.973782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.581 qpair failed and we were unable to recover it.
00:29:40.581 [2024-12-06 13:37:26.983695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:40.581 [2024-12-06 13:37:26.983756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:40.581 [2024-12-06 13:37:26.983769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:40.581 [2024-12-06 13:37:26.983775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:40.581 [2024-12-06 13:37:26.983785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:40.581 [2024-12-06 13:37:26.983798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:40.581 qpair failed and we were unable to recover it.
00:29:40.581 [2024-12-06 13:37:26.993634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:26.993695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:26.993709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:26.993716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:26.993721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:26.993734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.003653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.003706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.003720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.003726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.003731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.003744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.013758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.013813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.013825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.013831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.013837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.013849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.023826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.023881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.023894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.023900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.023905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.023917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.033824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.033878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.033891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.033897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.033902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.033914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.043853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.043906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.043920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.043926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.043931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.043943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.053878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.053928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.053944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.053949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.053954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.053967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.063925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.063985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.063996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.064002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.064007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.064018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.073918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.073967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.073985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.073990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.073995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.074006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.083959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.084043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.084055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.084061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.084066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.084078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.093994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.094045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.094057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.094062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.094067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.094078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.104025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.104097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.104109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.104115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.104120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.104131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.113888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.113943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.113954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.113959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.113968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.581 [2024-12-06 13:37:27.113978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.581 qpair failed and we were unable to recover it. 
00:29:40.581 [2024-12-06 13:37:27.124068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.581 [2024-12-06 13:37:27.124116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.581 [2024-12-06 13:37:27.124127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.581 [2024-12-06 13:37:27.124133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.581 [2024-12-06 13:37:27.124138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.124148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.134061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.134111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.134121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.134127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.134132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.134142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.144103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.144158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.144169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.144175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.144179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.144190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.154131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.154181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.154192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.154197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.154202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.154212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.164166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.164217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.164228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.164234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.164238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.164249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.174192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.174239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.174260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.174266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.174271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.174286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.184225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.184280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.184300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.184307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.184312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.184327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.194183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.194235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.194249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.194254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.194259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.194271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.204203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.204246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.204266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.204272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.204277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.204288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.214317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.214362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.214373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.214378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.214383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.214394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.224348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.224398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.224409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.224414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.224419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.224429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.582 [2024-12-06 13:37:27.234323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.582 [2024-12-06 13:37:27.234369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.582 [2024-12-06 13:37:27.234381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.582 [2024-12-06 13:37:27.234387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.582 [2024-12-06 13:37:27.234392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.582 [2024-12-06 13:37:27.234402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.582 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.244313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.244358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.244369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.244374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.244382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.244392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.254275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.254320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.254331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.254337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.254342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.254353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.264446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.264502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.264513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.264519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.264524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.264534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.274396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.274439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.274449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.274458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.274463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.274473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.284396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.284439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.284449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.284457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.284462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.284473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.294383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.294431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.294442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.294447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.294452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.294466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.304544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.304592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.304602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.304607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.304612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.304622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.314432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.314524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.314534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.314539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.314545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.314555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.324527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.324571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.324582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.324587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.324592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.324603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.334614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.849 [2024-12-06 13:37:27.334660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.849 [2024-12-06 13:37:27.334674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.849 [2024-12-06 13:37:27.334679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.849 [2024-12-06 13:37:27.334684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.849 [2024-12-06 13:37:27.334694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.849 qpair failed and we were unable to recover it. 
00:29:40.849 [2024-12-06 13:37:27.344692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.344756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.344766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.344772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.344777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.344787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.354545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.354592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.354602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.354608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.354613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.354623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.364664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.364708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.364718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.364723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.364728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.364738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.374696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.374741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.374751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.374756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.374764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.374774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.384781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.384864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.384874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.384880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.384884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.384894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.394755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.394798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.394808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.394814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.394819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.394829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.404743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.404786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.404797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.404802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.404807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.404817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.414823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.414869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.414879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.414884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.414888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.414899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.424815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.424866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.424876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.424881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.424886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.424896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.434880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.434925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.434935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.850 [2024-12-06 13:37:27.434940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.850 [2024-12-06 13:37:27.434945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.850 [2024-12-06 13:37:27.434955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.850 qpair failed and we were unable to recover it. 
00:29:40.850 [2024-12-06 13:37:27.444868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.850 [2024-12-06 13:37:27.444911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.850 [2024-12-06 13:37:27.444921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.851 [2024-12-06 13:37:27.444927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.851 [2024-12-06 13:37:27.444931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.851 [2024-12-06 13:37:27.444941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.851 qpair failed and we were unable to recover it. 
00:29:40.851 [2024-12-06 13:37:27.454935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.851 [2024-12-06 13:37:27.454976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.851 [2024-12-06 13:37:27.454986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.851 [2024-12-06 13:37:27.454992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.851 [2024-12-06 13:37:27.454996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.851 [2024-12-06 13:37:27.455006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.851 qpair failed and we were unable to recover it. 
00:29:40.851 [2024-12-06 13:37:27.464967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.851 [2024-12-06 13:37:27.465060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.851 [2024-12-06 13:37:27.465073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.851 [2024-12-06 13:37:27.465079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.851 [2024-12-06 13:37:27.465084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.851 [2024-12-06 13:37:27.465094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.851 qpair failed and we were unable to recover it. 
00:29:40.851 [2024-12-06 13:37:27.474955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.851 [2024-12-06 13:37:27.474999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.851 [2024-12-06 13:37:27.475010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.851 [2024-12-06 13:37:27.475015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.851 [2024-12-06 13:37:27.475019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.851 [2024-12-06 13:37:27.475029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.851 qpair failed and we were unable to recover it. 
00:29:40.851 [2024-12-06 13:37:27.484981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.851 [2024-12-06 13:37:27.485027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.851 [2024-12-06 13:37:27.485036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.851 [2024-12-06 13:37:27.485042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.851 [2024-12-06 13:37:27.485046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.851 [2024-12-06 13:37:27.485057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.851 qpair failed and we were unable to recover it. 
00:29:40.851 [2024-12-06 13:37:27.495032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:40.851 [2024-12-06 13:37:27.495079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:40.851 [2024-12-06 13:37:27.495089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:40.851 [2024-12-06 13:37:27.495094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:40.851 [2024-12-06 13:37:27.495099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:40.851 [2024-12-06 13:37:27.495109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.851 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.504987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.505038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.505049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.505055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.505063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.112 [2024-12-06 13:37:27.505073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.112 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.515059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.515101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.515111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.515117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.515122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.112 [2024-12-06 13:37:27.515132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.112 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.525085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.525125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.525136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.525141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.525146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.112 [2024-12-06 13:37:27.525156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.112 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.535148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.535239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.535250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.535256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.535261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.112 [2024-12-06 13:37:27.535271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.112 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.545192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.545241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.545251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.545256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.545261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.112 [2024-12-06 13:37:27.545272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.112 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.555227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.555298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.555317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.555324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.555329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.112 [2024-12-06 13:37:27.555344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.112 qpair failed and we were unable to recover it. 
00:29:41.112 [2024-12-06 13:37:27.565189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.112 [2024-12-06 13:37:27.565243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.112 [2024-12-06 13:37:27.565263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.112 [2024-12-06 13:37:27.565269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.112 [2024-12-06 13:37:27.565275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.113 [2024-12-06 13:37:27.565291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.113 qpair failed and we were unable to recover it. 
00:29:41.113 [2024-12-06 13:37:27.575251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.575299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.575312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.575317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.575322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.575334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.585256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.585307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.585317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.585323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.585327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.585338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.595283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.595327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.595340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.595346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.595351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.595361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.605298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.605339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.605350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.605355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.605359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.605370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.615358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.615403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.615414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.615420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.615425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.615436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.625387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.625436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.625447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.625452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.625460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.625471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.635322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.635398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.635408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.635414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.635421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.635432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.645380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.645423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.645433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.645439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.645443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.645457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.655476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.655520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.655530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.655536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.655540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.655550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.665444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.665493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.665503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.665508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.665513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.665524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.675480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.675523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.675534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.675539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.675544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.675554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.685493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.685560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.685571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.685577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.685581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.113 [2024-12-06 13:37:27.685592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.113 qpair failed and we were unable to recover it.
00:29:41.113 [2024-12-06 13:37:27.695551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.113 [2024-12-06 13:37:27.695592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.113 [2024-12-06 13:37:27.695602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.113 [2024-12-06 13:37:27.695608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.113 [2024-12-06 13:37:27.695613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.695624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.705548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.705591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.705601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.705606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.705611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.705622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.715591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.715635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.715645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.715650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.715655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.715665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.725613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.725652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.725665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.725670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.725675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.725686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.735627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.735667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.735680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.735686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.735691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.735702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.745709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.745780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.745791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.745796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.745801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.745811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.755690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.755730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.755740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.755746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.755751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.755760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.114 [2024-12-06 13:37:27.765688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.114 [2024-12-06 13:37:27.765728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.114 [2024-12-06 13:37:27.765738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.114 [2024-12-06 13:37:27.765744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.114 [2024-12-06 13:37:27.765751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.114 [2024-12-06 13:37:27.765761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.114 qpair failed and we were unable to recover it.
00:29:41.374 [2024-12-06 13:37:27.775760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.374 [2024-12-06 13:37:27.775812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.374 [2024-12-06 13:37:27.775822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.374 [2024-12-06 13:37:27.775828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.374 [2024-12-06 13:37:27.775833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.374 [2024-12-06 13:37:27.775843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.374 qpair failed and we were unable to recover it.
00:29:41.374 [2024-12-06 13:37:27.785768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.374 [2024-12-06 13:37:27.785810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.374 [2024-12-06 13:37:27.785820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.374 [2024-12-06 13:37:27.785825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.785830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.785840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.795788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.795833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.795843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.795848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.795853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.795863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.805678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.805720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.805730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.805736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.805740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.805750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.815838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.815879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.815889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.815894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.815899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.815909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.825869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.825909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.825919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.825925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.825929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.825939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.835901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.835954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.835964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.835970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.835974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.835984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.845931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.845971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.845981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.845987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.845991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.846001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.855980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.856024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.856040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.856046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.856050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.856060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.865988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.866029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.866039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.866045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.866050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.866061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.876009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.876053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.876062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.375 [2024-12-06 13:37:27.876068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.375 [2024-12-06 13:37:27.876073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.375 [2024-12-06 13:37:27.876083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.375 qpair failed and we were unable to recover it.
00:29:41.375 [2024-12-06 13:37:27.886041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.375 [2024-12-06 13:37:27.886078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.375 [2024-12-06 13:37:27.886088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.376 [2024-12-06 13:37:27.886093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.376 [2024-12-06 13:37:27.886098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.376 [2024-12-06 13:37:27.886108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.376 qpair failed and we were unable to recover it.
00:29:41.376 [2024-12-06 13:37:27.896056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.376 [2024-12-06 13:37:27.896097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.376 [2024-12-06 13:37:27.896108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.376 [2024-12-06 13:37:27.896114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.376 [2024-12-06 13:37:27.896121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.376 [2024-12-06 13:37:27.896131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.376 qpair failed and we were unable to recover it.
00:29:41.376 [2024-12-06 13:37:27.906115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.376 [2024-12-06 13:37:27.906155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.376 [2024-12-06 13:37:27.906165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.376 [2024-12-06 13:37:27.906170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.376 [2024-12-06 13:37:27.906175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.376 [2024-12-06 13:37:27.906185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.376 qpair failed and we were unable to recover it.
00:29:41.376 [2024-12-06 13:37:27.916111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.376 [2024-12-06 13:37:27.916153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.376 [2024-12-06 13:37:27.916163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.376 [2024-12-06 13:37:27.916169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.376 [2024-12-06 13:37:27.916174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.376 [2024-12-06 13:37:27.916183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.376 qpair failed and we were unable to recover it.
00:29:41.376 [2024-12-06 13:37:27.926150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.926193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.926204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.926209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.926214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.926224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.936255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.936303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.936322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.936329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.936334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.936348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.946216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.946262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.946281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.946288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.946293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.946307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.956245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.956291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.956311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.956317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.956323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.956337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.966259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.966337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.966348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.966354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.966358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.966370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.976321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.976363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.976373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.976378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.976383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.976393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.986311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.986353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.986367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.986372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.986377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.986387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:27.996356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:27.996401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:27.996412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:27.996417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:27.996422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.376 [2024-12-06 13:37:27.996432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-12-06 13:37:28.006378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.376 [2024-12-06 13:37:28.006436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.376 [2024-12-06 13:37:28.006446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.376 [2024-12-06 13:37:28.006452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.376 [2024-12-06 13:37:28.006459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.377 [2024-12-06 13:37:28.006470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.377 qpair failed and we were unable to recover it. 
00:29:41.377 [2024-12-06 13:37:28.016385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.377 [2024-12-06 13:37:28.016433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.377 [2024-12-06 13:37:28.016444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.377 [2024-12-06 13:37:28.016449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.377 [2024-12-06 13:37:28.016462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.377 [2024-12-06 13:37:28.016473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.377 qpair failed and we were unable to recover it. 
00:29:41.377 [2024-12-06 13:37:28.026429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.377 [2024-12-06 13:37:28.026480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.377 [2024-12-06 13:37:28.026490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.377 [2024-12-06 13:37:28.026496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.377 [2024-12-06 13:37:28.026503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.377 [2024-12-06 13:37:28.026513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.377 qpair failed and we were unable to recover it. 
00:29:41.637 [2024-12-06 13:37:28.036467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.637 [2024-12-06 13:37:28.036508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.637 [2024-12-06 13:37:28.036518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.637 [2024-12-06 13:37:28.036523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.637 [2024-12-06 13:37:28.036528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.637 [2024-12-06 13:37:28.036538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.637 qpair failed and we were unable to recover it. 
00:29:41.637 [2024-12-06 13:37:28.046477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.637 [2024-12-06 13:37:28.046514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.637 [2024-12-06 13:37:28.046525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.637 [2024-12-06 13:37:28.046530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.637 [2024-12-06 13:37:28.046534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.637 [2024-12-06 13:37:28.046544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.637 qpair failed and we were unable to recover it. 
00:29:41.637 [2024-12-06 13:37:28.056546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.637 [2024-12-06 13:37:28.056586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.637 [2024-12-06 13:37:28.056596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.637 [2024-12-06 13:37:28.056601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.637 [2024-12-06 13:37:28.056606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.637 [2024-12-06 13:37:28.056616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.637 qpair failed and we were unable to recover it. 
00:29:41.637 [2024-12-06 13:37:28.066542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.637 [2024-12-06 13:37:28.066582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.637 [2024-12-06 13:37:28.066592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.637 [2024-12-06 13:37:28.066598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.637 [2024-12-06 13:37:28.066603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.637 [2024-12-06 13:37:28.066613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.637 qpair failed and we were unable to recover it. 
00:29:41.637 [2024-12-06 13:37:28.076646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.637 [2024-12-06 13:37:28.076696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.637 [2024-12-06 13:37:28.076706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.637 [2024-12-06 13:37:28.076712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.637 [2024-12-06 13:37:28.076716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.637 [2024-12-06 13:37:28.076726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.637 qpair failed and we were unable to recover it. 
00:29:41.637 [2024-12-06 13:37:28.086615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.637 [2024-12-06 13:37:28.086699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.637 [2024-12-06 13:37:28.086709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.637 [2024-12-06 13:37:28.086715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.637 [2024-12-06 13:37:28.086720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.637 [2024-12-06 13:37:28.086730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.637 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.096709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.096757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.096767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.096773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.096777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.096787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.106691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.106735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.106745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.106751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.106755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.106766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.116664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.116708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.116722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.116728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.116733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.116744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.126690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.126734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.126744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.126749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.126754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.126764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.136769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.136817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.136827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.136832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.136837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.136847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.146723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.146766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.146776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.146782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.146786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.146796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.156793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.156835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.156845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.156850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.156859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.156869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.166809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.166849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.166859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.166864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.166869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.166879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.176782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.176833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.176844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.176849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.176853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.176864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.186871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.638 [2024-12-06 13:37:28.186929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.638 [2024-12-06 13:37:28.186940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.638 [2024-12-06 13:37:28.186945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.638 [2024-12-06 13:37:28.186950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.638 [2024-12-06 13:37:28.186961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.638 qpair failed and we were unable to recover it. 
00:29:41.638 [2024-12-06 13:37:28.196903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.196947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.196958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.196963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.196968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.196978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.206925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.206966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.206976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.206981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.206986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.206996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.216970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.217012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.217023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.217028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.217033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.217043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.226971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.227013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.227023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.227028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.227033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.227043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.237008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.237047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.237057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.237062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.237067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.237077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.247030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.247073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.247086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.247091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.247096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.247106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.257096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.257143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.257153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.257158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.257162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.257173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.267085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.267129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.267139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.267144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.267149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.267159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.277040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.277101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.277112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.277117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.277122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.277132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.639 [2024-12-06 13:37:28.287127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.639 [2024-12-06 13:37:28.287164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.639 [2024-12-06 13:37:28.287174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.639 [2024-12-06 13:37:28.287182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.639 [2024-12-06 13:37:28.287187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.639 [2024-12-06 13:37:28.287197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.639 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.297176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.297228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.297247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.297253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.297258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.297273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.307150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.307197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.307217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.307223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.307228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.307242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.317218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.317265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.317285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.317291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.317296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.317310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.327234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.327284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.327296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.327301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.327305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.327317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.337299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.337346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.337357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.337363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.337367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.337378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.347316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.347359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.347369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.347374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.347379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.347390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.357326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.357407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.357417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.357422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.357427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.357438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.367328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.900 [2024-12-06 13:37:28.367371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.900 [2024-12-06 13:37:28.367381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.900 [2024-12-06 13:37:28.367386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.900 [2024-12-06 13:37:28.367391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.900 [2024-12-06 13:37:28.367401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.900 qpair failed and we were unable to recover it. 
00:29:41.900 [2024-12-06 13:37:28.377392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.377437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.377450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.377459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.377464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.377474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.387401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.387442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.387452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.387460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.387464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.387475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.397438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.397517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.397527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.397532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.397537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.397547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.407450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.407493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.407503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.407508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.407513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.407523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.417520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.417614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.417626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.417634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.417639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.417649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.427531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.427573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.427583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.427588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.427593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.427604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.437548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.437589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.437599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.437605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.437609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.437619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.447559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.447601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.447611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.447617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.447621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.447631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.457628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.457678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.457689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.457694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.457699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.457709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.467493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.467536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.467546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.467551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.467556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.467566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.477629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.477668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.477678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.477683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.477688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.477698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.487672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.487748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.487758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.487763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.487767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.487777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.497751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:41.901 [2024-12-06 13:37:28.497791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:41.901 [2024-12-06 13:37:28.497801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:41.901 [2024-12-06 13:37:28.497806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:41.901 [2024-12-06 13:37:28.497811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:41.901 [2024-12-06 13:37:28.497821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:41.901 qpair failed and we were unable to recover it. 
00:29:41.901 [2024-12-06 13:37:28.507700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.901 [2024-12-06 13:37:28.507742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.901 [2024-12-06 13:37:28.507758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.901 [2024-12-06 13:37:28.507764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.901 [2024-12-06 13:37:28.507768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.902 [2024-12-06 13:37:28.507778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.902 qpair failed and we were unable to recover it.
00:29:41.902 [2024-12-06 13:37:28.517737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.902 [2024-12-06 13:37:28.517776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.902 [2024-12-06 13:37:28.517787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.902 [2024-12-06 13:37:28.517792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.902 [2024-12-06 13:37:28.517797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.902 [2024-12-06 13:37:28.517806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.902 qpair failed and we were unable to recover it.
00:29:41.902 [2024-12-06 13:37:28.527781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.902 [2024-12-06 13:37:28.527822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.902 [2024-12-06 13:37:28.527832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.902 [2024-12-06 13:37:28.527838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.902 [2024-12-06 13:37:28.527842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.902 [2024-12-06 13:37:28.527852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.902 qpair failed and we were unable to recover it.
00:29:41.902 [2024-12-06 13:37:28.537842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.902 [2024-12-06 13:37:28.537882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.902 [2024-12-06 13:37:28.537892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.902 [2024-12-06 13:37:28.537897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.902 [2024-12-06 13:37:28.537901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.902 [2024-12-06 13:37:28.537911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.902 qpair failed and we were unable to recover it.
00:29:41.902 [2024-12-06 13:37:28.547837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:41.902 [2024-12-06 13:37:28.547918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:41.902 [2024-12-06 13:37:28.547928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:41.902 [2024-12-06 13:37:28.547936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:41.902 [2024-12-06 13:37:28.547941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:41.902 [2024-12-06 13:37:28.547951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:41.902 qpair failed and we were unable to recover it.
00:29:42.160 [2024-12-06 13:37:28.557868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.557909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.557919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.557924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.557929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.557939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.567885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.567928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.567938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.567943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.567948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.567958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.577949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.578005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.578015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.578020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.578025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.578035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.587943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.587984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.587994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.587999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.588004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.588014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.598000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.598043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.598053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.598058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.598063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.598073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.607995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.608032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.608042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.608047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.608052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.608062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.618063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.618149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.618159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.618165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.618170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.618180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.628056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.628098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.628108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.628113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.628118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.628127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.637956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.637996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.638009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.638014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.638019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.638029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.648083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.648124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.648134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.648139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.648144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.648154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.658152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.658193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.658203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.658208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.658213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.658223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.668144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.668201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.668220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.668226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.668232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.668246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.161 [2024-12-06 13:37:28.678197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.161 [2024-12-06 13:37:28.678291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.161 [2024-12-06 13:37:28.678306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.161 [2024-12-06 13:37:28.678315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.161 [2024-12-06 13:37:28.678320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.161 [2024-12-06 13:37:28.678332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.161 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.688197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.688238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.688249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.688255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.688259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.688270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.698310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.698352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.698362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.698368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.698373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.698383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.708267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.708347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.708358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.708364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.708369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.708379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.718267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.718309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.718319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.718325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.718329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.718339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.728329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.728370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.728379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.728384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.728389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.728399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.738396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.738439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.738449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.738457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.738462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.738472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.748376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.748419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.748429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.748434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.748439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.748449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.758418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.758467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.758477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.758482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.758486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.758497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.768452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.768496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.768506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.768511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.768516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.768526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.778486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.778533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.778543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.778548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.778553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.778563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.788497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.788538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.788548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.788553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.788558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.788568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.798531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.798617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.798627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.798632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.798638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.798648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.162 [2024-12-06 13:37:28.808537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.162 [2024-12-06 13:37:28.808585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.162 [2024-12-06 13:37:28.808595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.162 [2024-12-06 13:37:28.808603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.162 [2024-12-06 13:37:28.808608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.162 [2024-12-06 13:37:28.808619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.162 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.818563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.818614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.818626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.818632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.818637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.818648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.828546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.828626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.828637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.828642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.828647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.828658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.838615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.838658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.838668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.838674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.838679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.838689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.848665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.848709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.848719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.848724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.848729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.848739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.858717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.423 [2024-12-06 13:37:28.858770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.423 [2024-12-06 13:37:28.858780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.423 [2024-12-06 13:37:28.858785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.423 [2024-12-06 13:37:28.858790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.423 [2024-12-06 13:37:28.858800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.423 qpair failed and we were unable to recover it. 
00:29:42.423 [2024-12-06 13:37:28.868714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.423 [2024-12-06 13:37:28.868758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.423 [2024-12-06 13:37:28.868768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.423 [2024-12-06 13:37:28.868774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.423 [2024-12-06 13:37:28.868780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.423 [2024-12-06 13:37:28.868790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.423 qpair failed and we were unable to recover it. 
00:29:42.423 [2024-12-06 13:37:28.878749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.423 [2024-12-06 13:37:28.878791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.423 [2024-12-06 13:37:28.878801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.423 [2024-12-06 13:37:28.878807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.423 [2024-12-06 13:37:28.878812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.423 [2024-12-06 13:37:28.878822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.423 qpair failed and we were unable to recover it. 
00:29:42.423 [2024-12-06 13:37:28.888734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.888772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.888782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.888787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.888792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.888802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.898834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.898881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.898892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.898897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.898902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.898912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.908823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.908866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.908876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.908881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.908886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.908896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.918848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.423 [2024-12-06 13:37:28.918901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.423 [2024-12-06 13:37:28.918911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.423 [2024-12-06 13:37:28.918917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.423 [2024-12-06 13:37:28.918924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.423 [2024-12-06 13:37:28.918934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.423 qpair failed and we were unable to recover it.
00:29:42.423 [2024-12-06 13:37:28.928915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.928965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.928976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.928981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.928986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.928996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.938937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.938982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.938993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.939001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.939006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.939016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.948954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.948996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.949006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.949011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.949016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.949026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.958960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.959002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.959012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.959018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.959022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.959033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.969022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.969062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.969072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.969077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.969082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.969092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.979037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.979116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.979126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.979131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.979136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.979146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.989040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.989083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.989094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.989099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.989104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.989114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:28.999076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:28.999123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:28.999133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:28.999139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:28.999144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:28.999154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.009080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.009118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.009128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.009134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.009138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.009149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.019202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.019268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.019279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.019285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.019290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.019300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.029162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.029223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.029242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.029249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.029254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.029269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.039200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.039245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.039265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.039271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.039276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.039291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.049159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.049200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.049211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.049217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.049222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.049233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.059255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.059300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.059310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.059315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.059320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.059330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.424 [2024-12-06 13:37:29.069260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.424 [2024-12-06 13:37:29.069303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.424 [2024-12-06 13:37:29.069314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.424 [2024-12-06 13:37:29.069323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.424 [2024-12-06 13:37:29.069328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.424 [2024-12-06 13:37:29.069339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.424 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.079291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.079332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.079343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.079348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.079353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.079363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.089304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.089344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.089354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.089359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.089364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.089374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.099360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.099450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.099463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.099469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.099474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.099484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.109398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.109470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.109481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.109486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.109491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.109504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.119372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.119420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.119431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.119437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.119442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.119452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.129403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.129464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.129475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.129480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.129485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.129495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.139506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.139550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.139560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.139565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.139570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.139580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.149452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.149501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.149513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.149518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.149523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.685 [2024-12-06 13:37:29.149534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.685 qpair failed and we were unable to recover it.
00:29:42.685 [2024-12-06 13:37:29.159500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.685 [2024-12-06 13:37:29.159598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.685 [2024-12-06 13:37:29.159609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.685 [2024-12-06 13:37:29.159615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.685 [2024-12-06 13:37:29.159619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.686 [2024-12-06 13:37:29.159630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.686 qpair failed and we were unable to recover it.
00:29:42.686 [2024-12-06 13:37:29.169484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.169524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.169534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.169539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.169544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.169554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.179553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.179610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.179620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.179625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.179630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.179640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.189553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.189593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.189605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.189610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.189615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.189626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.199589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.199640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.199650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.199658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.199663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.199673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.209624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.209665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.209675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.209681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.209686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.209696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.219657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.219702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.219712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.219717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.219722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.219732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.229682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.229726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.229736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.229741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.229746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.229756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.239688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.239736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.239746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.239751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.239756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.239769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.249696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.249735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.249745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.249751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.249756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.249766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.259789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.259832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.259842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.259847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.259852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.259862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.269787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.269827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.269837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.269843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.269847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.269858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.279835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.279876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.279886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.279891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.279896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.279906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.686 qpair failed and we were unable to recover it. 
00:29:42.686 [2024-12-06 13:37:29.289844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.686 [2024-12-06 13:37:29.289890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.686 [2024-12-06 13:37:29.289900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.686 [2024-12-06 13:37:29.289905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.686 [2024-12-06 13:37:29.289910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.686 [2024-12-06 13:37:29.289920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.687 qpair failed and we were unable to recover it. 
00:29:42.687 [2024-12-06 13:37:29.299905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.687 [2024-12-06 13:37:29.299951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.687 [2024-12-06 13:37:29.299961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.687 [2024-12-06 13:37:29.299966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.687 [2024-12-06 13:37:29.299971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.687 [2024-12-06 13:37:29.299981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.687 qpair failed and we were unable to recover it. 
00:29:42.687 [2024-12-06 13:37:29.309895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.687 [2024-12-06 13:37:29.309937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.687 [2024-12-06 13:37:29.309947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.687 [2024-12-06 13:37:29.309953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.687 [2024-12-06 13:37:29.309957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.687 [2024-12-06 13:37:29.309967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.687 qpair failed and we were unable to recover it. 
00:29:42.687 [2024-12-06 13:37:29.319931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.687 [2024-12-06 13:37:29.320001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.687 [2024-12-06 13:37:29.320012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.687 [2024-12-06 13:37:29.320017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.687 [2024-12-06 13:37:29.320022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.687 [2024-12-06 13:37:29.320032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.687 qpair failed and we were unable to recover it. 
00:29:42.687 [2024-12-06 13:37:29.329934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.687 [2024-12-06 13:37:29.329975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.687 [2024-12-06 13:37:29.329986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.687 [2024-12-06 13:37:29.329994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.687 [2024-12-06 13:37:29.329998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.687 [2024-12-06 13:37:29.330009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.687 qpair failed and we were unable to recover it. 
00:29:42.687 [2024-12-06 13:37:29.339990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.687 [2024-12-06 13:37:29.340043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.687 [2024-12-06 13:37:29.340054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.687 [2024-12-06 13:37:29.340059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.687 [2024-12-06 13:37:29.340063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.687 [2024-12-06 13:37:29.340074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.687 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.350000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.350044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.350054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.350059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.350064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.350074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.360058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.360142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.360152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.360157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.360163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.360173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.370046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.370086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.370097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.370102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.370107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.370123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.380025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.380072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.380082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.380087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.380092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.380102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.390099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.390144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.390154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.390159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.390164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.390174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.400154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.400197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.400207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.400212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.400217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.400228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.950 [2024-12-06 13:37:29.410156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.950 [2024-12-06 13:37:29.410207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.950 [2024-12-06 13:37:29.410217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.950 [2024-12-06 13:37:29.410222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.950 [2024-12-06 13:37:29.410227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.950 [2024-12-06 13:37:29.410237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.950 qpair failed and we were unable to recover it. 
00:29:42.951 [2024-12-06 13:37:29.420224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.951 [2024-12-06 13:37:29.420277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.951 [2024-12-06 13:37:29.420287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.951 [2024-12-06 13:37:29.420292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.951 [2024-12-06 13:37:29.420297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.951 [2024-12-06 13:37:29.420307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.951 qpair failed and we were unable to recover it. 
00:29:42.951 [2024-12-06 13:37:29.430221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:42.951 [2024-12-06 13:37:29.430302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:42.951 [2024-12-06 13:37:29.430313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:42.951 [2024-12-06 13:37:29.430318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:42.951 [2024-12-06 13:37:29.430322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:42.951 [2024-12-06 13:37:29.430333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:42.951 qpair failed and we were unable to recover it. 
00:29:42.951 [2024-12-06 13:37:29.440224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.440271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.440281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.440286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.440290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.440300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.450267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.450314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.450324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.450329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.450334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.450344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.460285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.460327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.460337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.460345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.460350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.460360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.470329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.470371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.470382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.470387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.470392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.470402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.480325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.480370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.480381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.480386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.480391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.480402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.490366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.490404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.490414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.490419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.490424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.490434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.500427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.500471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.500481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.500486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.500491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.500504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.510388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.510469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.510479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.510484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.510489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.510499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.520440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.520511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.520522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.520527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.520532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.520542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.530482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.530523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.530534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.530539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.530544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.530554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.540499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.951 [2024-12-06 13:37:29.540547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.951 [2024-12-06 13:37:29.540557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.951 [2024-12-06 13:37:29.540563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.951 [2024-12-06 13:37:29.540567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.951 [2024-12-06 13:37:29.540577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.951 qpair failed and we were unable to recover it.
00:29:42.951 [2024-12-06 13:37:29.550431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.952 [2024-12-06 13:37:29.550479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.952 [2024-12-06 13:37:29.550489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.952 [2024-12-06 13:37:29.550495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.952 [2024-12-06 13:37:29.550499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.952 [2024-12-06 13:37:29.550509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.952 qpair failed and we were unable to recover it.
00:29:42.952 [2024-12-06 13:37:29.560547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.952 [2024-12-06 13:37:29.560619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.952 [2024-12-06 13:37:29.560629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.952 [2024-12-06 13:37:29.560634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.952 [2024-12-06 13:37:29.560639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.952 [2024-12-06 13:37:29.560649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.952 qpair failed and we were unable to recover it.
00:29:42.952 [2024-12-06 13:37:29.570592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.952 [2024-12-06 13:37:29.570630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.952 [2024-12-06 13:37:29.570639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.952 [2024-12-06 13:37:29.570645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.952 [2024-12-06 13:37:29.570649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.952 [2024-12-06 13:37:29.570659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.952 qpair failed and we were unable to recover it.
00:29:42.952 [2024-12-06 13:37:29.580620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.952 [2024-12-06 13:37:29.580709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.952 [2024-12-06 13:37:29.580719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.952 [2024-12-06 13:37:29.580725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.952 [2024-12-06 13:37:29.580729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.952 [2024-12-06 13:37:29.580739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.952 qpair failed and we were unable to recover it.
00:29:42.952 [2024-12-06 13:37:29.590651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.952 [2024-12-06 13:37:29.590695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.952 [2024-12-06 13:37:29.590705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.952 [2024-12-06 13:37:29.590713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.952 [2024-12-06 13:37:29.590718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.952 [2024-12-06 13:37:29.590728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.952 qpair failed and we were unable to recover it.
00:29:42.952 [2024-12-06 13:37:29.600695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:42.952 [2024-12-06 13:37:29.600737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:42.952 [2024-12-06 13:37:29.600748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:42.952 [2024-12-06 13:37:29.600753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:42.952 [2024-12-06 13:37:29.600758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:42.952 [2024-12-06 13:37:29.600768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:42.952 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.610711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.610747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.610757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.610762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.610767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.610777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.620778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.620823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.620833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.620839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.620845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.620855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.630741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.630780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.630790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.630796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.630801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.630814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.640778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.640829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.640839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.640844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.640849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.640859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.650663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.650702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.650712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.650717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.650722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.650732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.660864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.660955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.660966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.660972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.660977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.660987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.670862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.670901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.670912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.670917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.216 [2024-12-06 13:37:29.670922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.216 [2024-12-06 13:37:29.670933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.216 qpair failed and we were unable to recover it.
00:29:43.216 [2024-12-06 13:37:29.680897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.216 [2024-12-06 13:37:29.680942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.216 [2024-12-06 13:37:29.680952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.216 [2024-12-06 13:37:29.680957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.680962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.680972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.690892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.690933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.690943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.690948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.690953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.690963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.700957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.701000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.701010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.701016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.701020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.701030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.710959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.711001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.711011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.711016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.711021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.711030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.720980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.721034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.721045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.721053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.721058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.721068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.730958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.730995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.731005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.731010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.731015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.731025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.741068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.741116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.741125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.741131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.741135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.741145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.751072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.751113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.751123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.751128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.751132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.751142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.761146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.761219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.761229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.761234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.761238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.761251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.771112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.771193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.771203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.771209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.771214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.771224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.781132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.217 [2024-12-06 13:37:29.781247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.217 [2024-12-06 13:37:29.781257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.217 [2024-12-06 13:37:29.781263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.217 [2024-12-06 13:37:29.781267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0
00:29:43.217 [2024-12-06 13:37:29.781277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.217 qpair failed and we were unable to recover it.
00:29:43.217 [2024-12-06 13:37:29.791160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.217 [2024-12-06 13:37:29.791207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.217 [2024-12-06 13:37:29.791227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.217 [2024-12-06 13:37:29.791233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.217 [2024-12-06 13:37:29.791238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.217 [2024-12-06 13:37:29.791252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.217 qpair failed and we were unable to recover it. 
00:29:43.217 [2024-12-06 13:37:29.801188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.217 [2024-12-06 13:37:29.801264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.217 [2024-12-06 13:37:29.801284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.217 [2024-12-06 13:37:29.801291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.217 [2024-12-06 13:37:29.801297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.217 [2024-12-06 13:37:29.801311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.217 qpair failed and we were unable to recover it. 
00:29:43.217 [2024-12-06 13:37:29.811216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.217 [2024-12-06 13:37:29.811265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.218 [2024-12-06 13:37:29.811285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.218 [2024-12-06 13:37:29.811292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.218 [2024-12-06 13:37:29.811297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.218 [2024-12-06 13:37:29.811311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.218 qpair failed and we were unable to recover it. 
00:29:43.218 [2024-12-06 13:37:29.821245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.218 [2024-12-06 13:37:29.821285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.218 [2024-12-06 13:37:29.821296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.218 [2024-12-06 13:37:29.821302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.218 [2024-12-06 13:37:29.821307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.218 [2024-12-06 13:37:29.821318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.218 qpair failed and we were unable to recover it. 
00:29:43.218 [2024-12-06 13:37:29.831281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.218 [2024-12-06 13:37:29.831323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.218 [2024-12-06 13:37:29.831333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.218 [2024-12-06 13:37:29.831339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.218 [2024-12-06 13:37:29.831344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.218 [2024-12-06 13:37:29.831354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.218 qpair failed and we were unable to recover it. 
00:29:43.218 [2024-12-06 13:37:29.841332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.218 [2024-12-06 13:37:29.841372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.218 [2024-12-06 13:37:29.841382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.218 [2024-12-06 13:37:29.841387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.218 [2024-12-06 13:37:29.841392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.218 [2024-12-06 13:37:29.841403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.218 qpair failed and we were unable to recover it. 
00:29:43.218 [2024-12-06 13:37:29.851338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.218 [2024-12-06 13:37:29.851375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.218 [2024-12-06 13:37:29.851385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.218 [2024-12-06 13:37:29.851394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.218 [2024-12-06 13:37:29.851399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.218 [2024-12-06 13:37:29.851409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.218 qpair failed and we were unable to recover it. 
00:29:43.218 [2024-12-06 13:37:29.861352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.218 [2024-12-06 13:37:29.861396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.218 [2024-12-06 13:37:29.861407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.218 [2024-12-06 13:37:29.861413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.218 [2024-12-06 13:37:29.861417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.218 [2024-12-06 13:37:29.861428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.218 qpair failed and we were unable to recover it. 
00:29:43.480 [2024-12-06 13:37:29.871403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.480 [2024-12-06 13:37:29.871472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.480 [2024-12-06 13:37:29.871483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.480 [2024-12-06 13:37:29.871490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.480 [2024-12-06 13:37:29.871495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.480 [2024-12-06 13:37:29.871506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.480 qpair failed and we were unable to recover it. 
00:29:43.480 [2024-12-06 13:37:29.881439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.881486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.881497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.881503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.881509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.881519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.891441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.891484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.891494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.891500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.891505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.891518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.901476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.901549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.901559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.901564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.901569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.901579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.911502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.911551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.911561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.911567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.911572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.911582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.921549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.921597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.921607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.921613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.921617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.921627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.931487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.931555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.931567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.931572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.931577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.931588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.941545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.941587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.941598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.941603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.941608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.941618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.951606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.951646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.951656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.951661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.951666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.951677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.961645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.961732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.961742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.961748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.961752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.961763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.971656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.971698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.971708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.971713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.971718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.971728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.981554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.981591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.981601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.981610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.981614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.981625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:29.991695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:29.991735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:29.991745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:29.991750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:29.991755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:29.991766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:30.001767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.481 [2024-12-06 13:37:30.001807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.481 [2024-12-06 13:37:30.001817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.481 [2024-12-06 13:37:30.002318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.481 [2024-12-06 13:37:30.002325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.481 [2024-12-06 13:37:30.002339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.481 qpair failed and we were unable to recover it. 
00:29:43.481 [2024-12-06 13:37:30.011800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.011846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.011857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.011862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.011867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.011877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.021780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.021823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.021833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.021839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.021843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.021857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.031833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.031893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.031903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.031908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.031913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.031924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.041946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.041997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.042007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.042012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.042017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.042027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.051956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.052006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.052016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.052021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.052026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.052036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.061940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.061989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.061999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.062005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.062010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.062020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.071949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.071999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.072010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.072015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.072020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.072030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.081972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.082049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.082060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.082065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.082070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.082080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.091990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.092032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.092042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.092048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.092052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.092062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.101994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.102031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.102041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.102046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.102051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.102061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.112019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.112066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.112076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.112083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.112088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.112098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.121991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.122041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.122051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.122057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.122061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.122072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.482 [2024-12-06 13:37:30.132092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.482 [2024-12-06 13:37:30.132136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.482 [2024-12-06 13:37:30.132146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.482 [2024-12-06 13:37:30.132151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.482 [2024-12-06 13:37:30.132156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.482 [2024-12-06 13:37:30.132166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.482 qpair failed and we were unable to recover it. 
00:29:43.744 [2024-12-06 13:37:30.142120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.744 [2024-12-06 13:37:30.142163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.744 [2024-12-06 13:37:30.142183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.744 [2024-12-06 13:37:30.142190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.744 [2024-12-06 13:37:30.142195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.744 [2024-12-06 13:37:30.142209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.744 qpair failed and we were unable to recover it. 
00:29:43.744 [2024-12-06 13:37:30.152147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.744 [2024-12-06 13:37:30.152201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.744 [2024-12-06 13:37:30.152213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.744 [2024-12-06 13:37:30.152219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.744 [2024-12-06 13:37:30.152224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.744 [2024-12-06 13:37:30.152240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.744 qpair failed and we were unable to recover it. 
00:29:43.744 [2024-12-06 13:37:30.162228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.744 [2024-12-06 13:37:30.162272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.744 [2024-12-06 13:37:30.162292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.744 [2024-12-06 13:37:30.162298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.162304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.162319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.172196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.172245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.172264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.172271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.172276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.172290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.182198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.182243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.182263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.182269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.182274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.182288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.192261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.192317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.192330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.192335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.192340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.192352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.202148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.202218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.202229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.202234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.202239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.202250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.212286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.212322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.212332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.212338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.212343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.212353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.222305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.222344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.222354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.222360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.222365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.222375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.232220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.232263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.232273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.232278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.232283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.232294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.242404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.242447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.242466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.242472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.242476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.242487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.252407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.252450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.252463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.252469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.252474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.252484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.262297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.262338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.262348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.262353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.262358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.262368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.272483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.272529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.272539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.272544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.272549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.272559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.282491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.282538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.282548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.282554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.745 [2024-12-06 13:37:30.282558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.745 [2024-12-06 13:37:30.282571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-12-06 13:37:30.292483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.745 [2024-12-06 13:37:30.292521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.745 [2024-12-06 13:37:30.292531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.745 [2024-12-06 13:37:30.292536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.292541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.292551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.302513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.302556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.302566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.302571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.302576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.302586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.312568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.312611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.312621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.312626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.312631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.312641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.322594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.322646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.322656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.322661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.322666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.322676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.332531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.332577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.332589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.332595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.332599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.332610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.342572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.342614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.342624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.342630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.342634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.342645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.352692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.352737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.352747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.352752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.352757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.352767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.362700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.362747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.362757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.362762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.362767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.362777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.372735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.372777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.372792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.372797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.372802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.372812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.382627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.382670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.382681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.382686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.382691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.382702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-12-06 13:37:30.392773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.746 [2024-12-06 13:37:30.392815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.746 [2024-12-06 13:37:30.392826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.746 [2024-12-06 13:37:30.392831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.746 [2024-12-06 13:37:30.392835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:43.746 [2024-12-06 13:37:30.392846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.402834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.402876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.402886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.402892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.402896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.402907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.412852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.412895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.412905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.412910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.412915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.412928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.422865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.422908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.422918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.422924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.422928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.422938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.432892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.432938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.432948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.432953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.432958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.432968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.442816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.442863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.442874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.442879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.442884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.442895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.452937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.452987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.452997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.453003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.453007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.453017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.462952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.462991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.463001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.463007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.463011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.463021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.472990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.473030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.473040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.473045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.473049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.473059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.483027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.483069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.483079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.009 [2024-12-06 13:37:30.483084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.009 [2024-12-06 13:37:30.483089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.009 [2024-12-06 13:37:30.483099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.009 qpair failed and we were unable to recover it. 
00:29:44.009 [2024-12-06 13:37:30.493013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.009 [2024-12-06 13:37:30.493057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.009 [2024-12-06 13:37:30.493068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.493073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.493078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.493088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.503061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.503112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.503125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.503131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.503135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.503146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.513084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.513125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.513135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.513141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.513145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.513155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.523051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.523093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.523103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.523108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.523113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.523123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.533114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.533151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.533161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.533166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.533171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.533181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.543161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.543195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.543209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.543215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.543219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.543233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.553225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.553268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.553279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.553284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.553289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.553299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.563197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.563263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.563282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.563289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.563294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.563308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.573251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.573292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.573311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.573318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.573323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.573337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.583290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.583330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.583342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.583348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.583353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.583364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.593289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.593333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.593343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.593349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.593353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.593364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.603320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.603365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.603375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.603380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.603385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.603395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.613257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.613294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.613304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.010 [2024-12-06 13:37:30.613309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.010 [2024-12-06 13:37:30.613314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.010 [2024-12-06 13:37:30.613324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.010 qpair failed and we were unable to recover it. 
00:29:44.010 [2024-12-06 13:37:30.623387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.010 [2024-12-06 13:37:30.623432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.010 [2024-12-06 13:37:30.623443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.011 [2024-12-06 13:37:30.623448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.011 [2024-12-06 13:37:30.623453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.011 [2024-12-06 13:37:30.623468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.011 qpair failed and we were unable to recover it. 
00:29:44.011 [2024-12-06 13:37:30.633430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.011 [2024-12-06 13:37:30.633479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.011 [2024-12-06 13:37:30.633493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.011 [2024-12-06 13:37:30.633499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.011 [2024-12-06 13:37:30.633504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.011 [2024-12-06 13:37:30.633514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.011 qpair failed and we were unable to recover it. 
00:29:44.011 [2024-12-06 13:37:30.643463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.011 [2024-12-06 13:37:30.643507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.011 [2024-12-06 13:37:30.643518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.011 [2024-12-06 13:37:30.643523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.011 [2024-12-06 13:37:30.643528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.011 [2024-12-06 13:37:30.643538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.011 qpair failed and we were unable to recover it. 
00:29:44.011 [2024-12-06 13:37:30.653486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.011 [2024-12-06 13:37:30.653532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.011 [2024-12-06 13:37:30.653542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.011 [2024-12-06 13:37:30.653547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.011 [2024-12-06 13:37:30.653553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.011 [2024-12-06 13:37:30.653563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.011 qpair failed and we were unable to recover it. 
00:29:44.011 [2024-12-06 13:37:30.663525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.011 [2024-12-06 13:37:30.663563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.011 [2024-12-06 13:37:30.663572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.011 [2024-12-06 13:37:30.663578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.011 [2024-12-06 13:37:30.663582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.011 [2024-12-06 13:37:30.663592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.011 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.673537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.673582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.673592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.673597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.673605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.673615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.683603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.683646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.683656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.683661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.683666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.683676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.693474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.693521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.693532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.693538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.693542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.693553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.703651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.703689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.703700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.703705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.703710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.703720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.713645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.713691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.713700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.713706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.713710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.713720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.723685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.723725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.723736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.723741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.723746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.723756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.733697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.733740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.733750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.733756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.733760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.733770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.743752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.743794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.743804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.743809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.743814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.743824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.753691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.753734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.753744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.753749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.753754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.753764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.763804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.763849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.763861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.763866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.763871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.763881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.773814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.773852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.773862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.274 [2024-12-06 13:37:30.773867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.274 [2024-12-06 13:37:30.773872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.274 [2024-12-06 13:37:30.773882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.274 qpair failed and we were unable to recover it. 
00:29:44.274 [2024-12-06 13:37:30.783799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.274 [2024-12-06 13:37:30.783838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.274 [2024-12-06 13:37:30.783847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.783853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.783857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.783868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.793729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.793771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.793781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.793786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.793791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.793801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.803877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.803921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.803931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.803936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.803944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.803954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.813899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.813944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.813955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.813961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.813965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.813975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.823927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.823964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.823974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.823980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.823984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.823994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.833963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.834038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.834048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.834053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.834057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.834068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.843981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.844070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.844080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.844086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.844090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.844100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.853980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.854018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.854028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.854034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.854038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.854048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.863960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.864047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.864058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.864064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.864068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.864079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.874073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.874117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.874128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.874133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.874138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.874148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.884045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.884087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.884098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.884103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.884107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.884117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.894120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.894159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.894174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.894180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.894184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.894195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.904140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.904188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.904207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.275 [2024-12-06 13:37:30.904214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.275 [2024-12-06 13:37:30.904219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.275 [2024-12-06 13:37:30.904233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.275 qpair failed and we were unable to recover it. 
00:29:44.275 [2024-12-06 13:37:30.914196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.275 [2024-12-06 13:37:30.914270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.275 [2024-12-06 13:37:30.914289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.276 [2024-12-06 13:37:30.914296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.276 [2024-12-06 13:37:30.914302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.276 [2024-12-06 13:37:30.914315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.276 qpair failed and we were unable to recover it. 
00:29:44.276 [2024-12-06 13:37:30.924210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.276 [2024-12-06 13:37:30.924255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.276 [2024-12-06 13:37:30.924275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.276 [2024-12-06 13:37:30.924282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.276 [2024-12-06 13:37:30.924287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.276 [2024-12-06 13:37:30.924301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.276 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.934131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.934193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.934206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.934211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.934221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.934233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.944249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.944294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.944313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.944320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.944325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.944339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.954250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.954292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.954303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.954309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.954314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.954326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.964330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.964374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.964384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.964389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.964394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.964404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.974256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.974303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.974314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.974319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.974324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.974334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.984366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.984403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.984414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.984419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.984424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.984434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:30.994373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:30.994417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:30.994427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:30.994433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:30.994437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:30.994448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:31.004428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:31.004470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:31.004481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:31.004486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:31.004492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:31.004503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:31.014450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:31.014533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:31.014543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:31.014549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:31.014553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:31.014564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:31.024479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:31.024517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:31.024530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:31.024536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:31.024540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:31.024551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:31.034376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:31.034416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:31.034427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:31.034432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:31.034436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:31.034446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:31.044531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:31.044601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.538 [2024-12-06 13:37:31.044611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.538 [2024-12-06 13:37:31.044617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.538 [2024-12-06 13:37:31.044621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.538 [2024-12-06 13:37:31.044632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.538 qpair failed and we were unable to recover it. 
00:29:44.538 [2024-12-06 13:37:31.054558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.538 [2024-12-06 13:37:31.054605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.539 [2024-12-06 13:37:31.054617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.539 [2024-12-06 13:37:31.054622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.539 [2024-12-06 13:37:31.054627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.539 [2024-12-06 13:37:31.054637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.539 qpair failed and we were unable to recover it. 
00:29:44.539 [2024-12-06 13:37:31.064593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.539 [2024-12-06 13:37:31.064633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.539 [2024-12-06 13:37:31.064644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.539 [2024-12-06 13:37:31.064649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.539 [2024-12-06 13:37:31.064656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.539 [2024-12-06 13:37:31.064667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.539 qpair failed and we were unable to recover it. 
00:29:44.539 [2024-12-06 13:37:31.074642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.539 [2024-12-06 13:37:31.074688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.539 [2024-12-06 13:37:31.074698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.539 [2024-12-06 13:37:31.074703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.539 [2024-12-06 13:37:31.074708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.539 [2024-12-06 13:37:31.074718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.539 qpair failed and we were unable to recover it. 
00:29:44.539 [2024-12-06 13:37:31.084645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.539 [2024-12-06 13:37:31.084688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.539 [2024-12-06 13:37:31.084697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.539 [2024-12-06 13:37:31.084702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.539 [2024-12-06 13:37:31.084707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b520c0 00:29:44.539 [2024-12-06 13:37:31.084717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:44.539 qpair failed and we were unable to recover it. 00:29:44.539 [2024-12-06 13:37:31.084860] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:44.539 A controller has encountered a failure and is being reset. 00:29:44.539 Controller properly reset. 00:29:44.539 Initializing NVMe Controllers 00:29:44.539 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:44.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:44.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:44.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:44.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:44.539 Initialization complete. Launching workers. 
00:29:44.539 Starting thread on core 1 00:29:44.539 Starting thread on core 2 00:29:44.539 Starting thread on core 3 00:29:44.539 Starting thread on core 0 00:29:44.539 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:44.539 00:29:44.539 real 0m11.397s 00:29:44.539 user 0m21.539s 00:29:44.539 sys 0m3.933s 00:29:44.539 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.539 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.539 ************************************ 00:29:44.539 END TEST nvmf_target_disconnect_tc2 00:29:44.539 ************************************ 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.801 rmmod nvme_tcp 00:29:44.801 rmmod nvme_fabrics 00:29:44.801 rmmod nvme_keyring 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2340188 ']' 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2340188 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2340188 ']' 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2340188 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2340188 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2340188' 00:29:44.801 killing process with pid 2340188 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2340188 00:29:44.801 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2340188 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.061 13:37:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.975 13:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.975 00:29:46.975 real 0m21.774s 00:29:46.975 user 0m49.198s 00:29:46.975 sys 0m10.096s 00:29:46.975 13:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.975 13:37:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:46.975 ************************************ 00:29:46.975 END TEST nvmf_target_disconnect 00:29:46.975 ************************************ 00:29:46.975 13:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:46.975 00:29:46.975 real 6m28.586s 00:29:46.975 user 11m21.198s 00:29:46.975 sys 2m15.061s 00:29:46.975 13:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.975 13:37:33 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.975 ************************************ 00:29:46.975 END TEST nvmf_host 00:29:46.975 ************************************ 00:29:47.237 13:37:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:47.237 13:37:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:47.237 13:37:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:47.237 13:37:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.237 13:37:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.237 13:37:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.237 ************************************ 00:29:47.237 START TEST nvmf_target_core_interrupt_mode 00:29:47.237 ************************************ 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:47.237 * Looking for test storage... 
00:29:47.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:47.237 13:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.237 --rc 
genhtml_branch_coverage=1 00:29:47.237 --rc genhtml_function_coverage=1 00:29:47.237 --rc genhtml_legend=1 00:29:47.237 --rc geninfo_all_blocks=1 00:29:47.237 --rc geninfo_unexecuted_blocks=1 00:29:47.237 00:29:47.237 ' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.237 --rc genhtml_branch_coverage=1 00:29:47.237 --rc genhtml_function_coverage=1 00:29:47.237 --rc genhtml_legend=1 00:29:47.237 --rc geninfo_all_blocks=1 00:29:47.237 --rc geninfo_unexecuted_blocks=1 00:29:47.237 00:29:47.237 ' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.237 --rc genhtml_branch_coverage=1 00:29:47.237 --rc genhtml_function_coverage=1 00:29:47.237 --rc genhtml_legend=1 00:29:47.237 --rc geninfo_all_blocks=1 00:29:47.237 --rc geninfo_unexecuted_blocks=1 00:29:47.237 00:29:47.237 ' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.237 --rc genhtml_branch_coverage=1 00:29:47.237 --rc genhtml_function_coverage=1 00:29:47.237 --rc genhtml_legend=1 00:29:47.237 --rc geninfo_all_blocks=1 00:29:47.237 --rc geninfo_unexecuted_blocks=1 00:29:47.237 00:29:47.237 ' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.237 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.238 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.238 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.238 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.499 
13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.499 13:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.499 
13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:47.499 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:47.500 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:47.500 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.500 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.500 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.500 ************************************ 00:29:47.500 START TEST nvmf_abort 00:29:47.500 ************************************ 00:29:47.500 13:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:47.500 * Looking for test storage... 
00:29:47.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:47.500 13:37:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:47.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.500 --rc genhtml_branch_coverage=1 00:29:47.500 --rc genhtml_function_coverage=1 00:29:47.500 --rc genhtml_legend=1 00:29:47.500 --rc geninfo_all_blocks=1 00:29:47.500 --rc geninfo_unexecuted_blocks=1 00:29:47.500 00:29:47.500 ' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:47.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.500 --rc genhtml_branch_coverage=1 00:29:47.500 --rc genhtml_function_coverage=1 00:29:47.500 --rc genhtml_legend=1 00:29:47.500 --rc geninfo_all_blocks=1 00:29:47.500 --rc geninfo_unexecuted_blocks=1 00:29:47.500 00:29:47.500 ' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:47.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.500 --rc genhtml_branch_coverage=1 00:29:47.500 --rc genhtml_function_coverage=1 00:29:47.500 --rc genhtml_legend=1 00:29:47.500 --rc geninfo_all_blocks=1 00:29:47.500 --rc geninfo_unexecuted_blocks=1 00:29:47.500 00:29:47.500 ' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:47.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.500 --rc genhtml_branch_coverage=1 00:29:47.500 --rc genhtml_function_coverage=1 00:29:47.500 --rc genhtml_legend=1 00:29:47.500 --rc geninfo_all_blocks=1 00:29:47.500 --rc geninfo_unexecuted_blocks=1 00:29:47.500 00:29:47.500 ' 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.500 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.762 13:37:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.762 13:37:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.762 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.763 13:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:55.901 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:55.902 13:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:55.902 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:55.902 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.902 
13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:55.902 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:55.902 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.902 13:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:55.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:55.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:29:55.902 00:29:55.902 --- 10.0.0.2 ping statistics --- 00:29:55.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.902 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:29:55.902 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:55.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:29:55.903 00:29:55.903 --- 10.0.0.1 ping statistics --- 00:29:55.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.903 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2345635 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2345635 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2345635 ']' 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.903 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 [2024-12-06 13:37:41.745079] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:55.903 [2024-12-06 13:37:41.746711] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:29:55.903 [2024-12-06 13:37:41.746787] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.903 [2024-12-06 13:37:41.849011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:55.903 [2024-12-06 13:37:41.901086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.903 [2024-12-06 13:37:41.901139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.903 [2024-12-06 13:37:41.901148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.903 [2024-12-06 13:37:41.901156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.903 [2024-12-06 13:37:41.901163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.903 [2024-12-06 13:37:41.903246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.903 [2024-12-06 13:37:41.903407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.903 [2024-12-06 13:37:41.903409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.903 [2024-12-06 13:37:41.982757] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:55.903 [2024-12-06 13:37:41.983948] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:55.903 [2024-12-06 13:37:41.984271] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:55.903 [2024-12-06 13:37:41.984412] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:55.903 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.903 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:55.903 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.903 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:55.903 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 [2024-12-06 13:37:42.608310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:56.165 Malloc0 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 Delay0 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 [2024-12-06 13:37:42.708276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.165 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:56.426 [2024-12-06 13:37:42.853671] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:58.342 Initializing NVMe Controllers 00:29:58.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:58.342 controller IO queue size 128 less than required 00:29:58.342 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:58.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:58.342 Initialization complete. Launching workers. 
00:29:58.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28572 00:29:58.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28629, failed to submit 66 00:29:58.342 success 28572, unsuccessful 57, failed 0 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.342 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.342 rmmod nvme_tcp 00:29:58.602 rmmod nvme_fabrics 00:29:58.602 rmmod nvme_keyring 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.602 13:37:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2345635 ']' 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2345635 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2345635 ']' 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2345635 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2345635 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2345635' 00:29:58.602 killing process with pid 2345635 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2345635 00:29:58.602 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2345635 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.862 13:37:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.862 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.773 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.773 00:30:00.773 real 0m13.435s 00:30:00.773 user 0m11.096s 00:30:00.773 sys 0m7.003s 00:30:00.773 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.773 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.773 ************************************ 00:30:00.773 END TEST nvmf_abort 00:30:00.773 ************************************ 00:30:00.773 13:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:00.773 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:00.773 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.773 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.035 ************************************ 00:30:01.035 START TEST nvmf_ns_hotplug_stress 00:30:01.035 ************************************ 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:01.035 * Looking for test storage... 
00:30:01.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.035 13:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.035 13:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:01.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.035 --rc genhtml_branch_coverage=1 00:30:01.035 --rc genhtml_function_coverage=1 00:30:01.035 --rc genhtml_legend=1 00:30:01.035 --rc geninfo_all_blocks=1 00:30:01.035 --rc geninfo_unexecuted_blocks=1 00:30:01.035 00:30:01.035 ' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:01.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.035 --rc genhtml_branch_coverage=1 00:30:01.035 --rc genhtml_function_coverage=1 00:30:01.035 --rc genhtml_legend=1 00:30:01.035 --rc geninfo_all_blocks=1 00:30:01.035 --rc geninfo_unexecuted_blocks=1 00:30:01.035 00:30:01.035 ' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:01.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.035 --rc genhtml_branch_coverage=1 00:30:01.035 --rc genhtml_function_coverage=1 00:30:01.035 --rc genhtml_legend=1 00:30:01.035 --rc geninfo_all_blocks=1 00:30:01.035 --rc geninfo_unexecuted_blocks=1 00:30:01.035 00:30:01.035 ' 00:30:01.035 13:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:01.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.035 --rc genhtml_branch_coverage=1 00:30:01.035 --rc genhtml_function_coverage=1 00:30:01.035 --rc genhtml_legend=1 00:30:01.035 --rc geninfo_all_blocks=1 00:30:01.035 --rc geninfo_unexecuted_blocks=1 00:30:01.035 00:30:01.035 ' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.035 13:37:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.035 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.296 
13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.296 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.297 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.297 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:01.297 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.297 13:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.432 
13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.432 13:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.432 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:09.432 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.433 13:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:09.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.433 
13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:09.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:09.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:09.433 
13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.433 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:30:09.433 00:30:09.433 --- 10.0.0.2 ping statistics --- 00:30:09.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.433 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:30:09.433 00:30:09.433 --- 10.0.0.1 ping statistics --- 00:30:09.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.433 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.433 13:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2350604 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2350604 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2350604 ']' 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.433 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:09.433 [2024-12-06 13:37:55.228368] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:09.434 [2024-12-06 13:37:55.229505] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:30:09.434 [2024-12-06 13:37:55.229558] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.434 [2024-12-06 13:37:55.331816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:09.434 [2024-12-06 13:37:55.382666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.434 [2024-12-06 13:37:55.382719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.434 [2024-12-06 13:37:55.382728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.434 [2024-12-06 13:37:55.382735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.434 [2024-12-06 13:37:55.382742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.434 [2024-12-06 13:37:55.384563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.434 [2024-12-06 13:37:55.384888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.434 [2024-12-06 13:37:55.384888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.434 [2024-12-06 13:37:55.462160] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.434 [2024-12-06 13:37:55.463112] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:09.434 [2024-12-06 13:37:55.463557] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:09.434 [2024-12-06 13:37:55.463699] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:09.434 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.434 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:09.434 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.434 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.434 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:09.693 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.693 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:09.693 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.693 [2024-12-06 13:37:56.273943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.693 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:09.954 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.308 [2024-12-06 13:37:56.662694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.308 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:10.308 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:10.597 Malloc0 00:30:10.597 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:10.858 Delay0 00:30:10.858 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.858 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:11.118 NULL1 00:30:11.118 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:11.380 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2350999 00:30:11.380 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:11.380 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:11.380 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.766 Read completed with error (sct=0, sc=11) 00:30:12.766 13:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:30:12.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.766 13:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:12.766 13:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:13.026 true 00:30:13.026 13:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:13.026 13:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.967 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.967 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:13.967 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:14.229 true 00:30:14.229 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:14.229 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.229 13:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.490 13:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:14.490 13:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:14.751 true 00:30:14.751 13:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:14.752 13:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 13:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.139 13:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:16.139 13:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:16.139 true 00:30:16.139 13:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:16.139 13:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:17.081 13:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.342 13:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:17.342 13:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:17.342 true 00:30:17.342 13:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:17.342 13:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.604 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.864 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:17.864 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:17.864 true 00:30:17.864 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:17.864 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.126 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.386 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:18.386 13:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:18.645 true 00:30:18.645 13:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:18.645 13:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.586 13:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.586 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:19.586 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:19.845 true 00:30:19.845 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:19.845 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.845 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.105 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:20.105 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:20.364 true 00:30:20.364 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:20.364 13:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.302 13:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.566 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:21.566 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:21.826 true 00:30:21.826 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:21.826 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:22.768 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:22.768 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.768 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:22.768 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:23.028 true 00:30:23.028 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:23.028 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.028 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.288 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:23.288 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:23.547 true 00:30:23.547 13:38:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:23.547 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.806 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.806 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:23.806 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:24.067 true 00:30:24.067 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:24.067 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.328 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.329 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:24.329 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:24.589 true 
00:30:24.589 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:24.589 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.973 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:25.973 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:25.973 true 00:30:26.233 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:26.233 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.174 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.174 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:27.174 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:27.174 true 00:30:27.433 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:27.433 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.433 13:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.694 13:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:27.694 13:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:27.955 true 00:30:27.955 13:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:27.955 13:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:28.900 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.161 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:29.161 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:29.422 true 00:30:29.422 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:29.422 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:30.364 13:38:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.364 13:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:30.364 13:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:30.625 true 00:30:30.625 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:30.625 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.885 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.885 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:30.885 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:31.145 true 00:30:31.145 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:31.145 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:30:31.406 13:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.406 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:31.406 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:31.667 true 00:30:31.667 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:31.667 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.927 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.927 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:31.927 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:32.189 true 00:30:32.189 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:32.189 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:32.449 13:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.449 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:32.449 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:32.709 true 00:30:32.709 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:32.709 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.968 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.228 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:33.228 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:33.228 true 00:30:33.228 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:33.228 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.487 13:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.747 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:33.747 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:33.747 true 00:30:33.747 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:33.747 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.007 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.267 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:34.267 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:34.267 true 00:30:34.267 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:34.267 13:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.528 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.788 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:34.788 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:34.788 true 00:30:34.788 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:34.788 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.048 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.309 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:35.309 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:35.309 true 00:30:35.568 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:35.568 13:38:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.568 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.829 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:35.829 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:36.088 true 00:30:36.088 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:36.088 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.088 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.349 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:36.349 13:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:36.609 true 00:30:36.609 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:36.609 13:38:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:37.992 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:37.992 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:37.992 true 00:30:37.992 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:37.992 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.932 
13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.192 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:39.192 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:39.192 true 00:30:39.192 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:39.192 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.452 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.712 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:39.712 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:39.712 true 00:30:39.973 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:39.973 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:30:40.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:40.914 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.174 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:41.174 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:41.434 true 00:30:41.434 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999 00:30:41.434 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.375 13:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.376 Initializing NVMe Controllers 00:30:42.376 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:42.376 Controller IO queue size 128, less than required.
00:30:42.376 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:42.376 Controller IO queue size 128, less than required.
00:30:42.376 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:42.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:42.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:42.376 Initialization complete. Launching workers.
00:30:42.376 ========================================================
00:30:42.376                                                                           Latency(us)
00:30:42.376 Device Information                                                       :     IOPS    MiB/s   Average      min        max
00:30:42.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2238.27     1.09  32807.33  2005.58 1015186.43
00:30:42.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16056.10     7.84   7971.85  1128.84  402526.92
00:30:42.376 ========================================================
00:30:42.376 Total                                                                    : 18294.37     8.93  11010.40  1128.84 1015186.43
00:30:42.376
00:30:42.376 13:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:30:42.376 13:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:30:42.636 true
00:30:42.636 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2350999
00:30:42.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2350999) - No such process
00:30:42.636 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@53 -- # wait 2350999 00:30:42.636 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.636 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:42.896 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:42.896 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:42.896 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:42.896 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:42.896 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:43.156 null0 00:30:43.156 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.156 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.156 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:43.156 null1 00:30:43.156 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.156 
13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.156 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:43.416 null2 00:30:43.416 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.416 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.417 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:43.677 null3 00:30:43.677 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.677 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.677 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:43.677 null4 00:30:43.677 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.677 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.677 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:43.960 null5 00:30:43.960 13:38:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:43.960 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:43.960 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:43.960 null6 00:30:44.221 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:44.221 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:44.222 null7 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
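The `@58`–`@60` markers above show the setup for the multi-threaded phase: the script creates eight null bdevs, `null0` through `null7`, each 100 MiB with a 4096-byte block size. A hedged standalone sketch of that loop (`rpc` again stands in for `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Sketch of the namespace-pool setup traced above (ns_hotplug_stress.sh @58-@60).
# rpc() is a stand-in for scripts/rpc.py so this runs standalone.
rpc() { echo "rpc.py $*"; }

nthreads=8                                    # @58
created=()
for (( i = 0; i < nthreads; i++ )); do        # @59
    rpc bdev_null_create "null$i" 100 4096    # @60: name, size in MiB, block size
    created+=("null$i")
done
echo "created ${#created[@]} null bdevs"
```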
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2357429 2357431 2357432 2357434 2357436 2357438 2357440 2357442
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:44.222 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:30:44.223 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:30:44.223 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:30:44.223 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.223 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:44.483 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:44.483 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:44.483 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:44.483 13:38:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:44.483 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:44.483 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:44.483 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:44.483 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:44.744 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:44.745 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:44.745 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:44.745 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.004 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.005 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.265 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.526 13:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:45.526 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:45.785 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:45.786 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:46.046 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:46.306 13:38:32
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.306 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.565 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.565 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.565 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:46.565 13:38:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:46.565 13:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.565 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:46.823 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.082 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.341 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.341 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.341 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.341 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.341 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.341 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.341 13:38:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.342 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.601 13:38:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:47.601 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:47.861 13:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:47.861 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:48.122 rmmod nvme_tcp 00:30:48.122 rmmod nvme_fabrics 00:30:48.122 rmmod nvme_keyring 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:48.122 13:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2350604 ']' 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2350604 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2350604 ']' 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2350604 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.122 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2350604 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2350604' 00:30:48.383 killing process with pid 2350604 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2350604 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2350604 00:30:48.383 
13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.383 13:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:50.920 00:30:50.920 real 0m49.551s 00:30:50.920 user 3m1.305s 00:30:50.920 sys 0m21.380s 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.920 ************************************ 00:30:50.920 END TEST nvmf_ns_hotplug_stress 00:30:50.920 ************************************ 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:50.920 ************************************ 00:30:50.920 START TEST nvmf_delete_subsystem 00:30:50.920 ************************************ 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:50.920 * Looking for test storage... 
00:30:50.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.920 13:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.920 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.921 13:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.921 --rc genhtml_branch_coverage=1 00:30:50.921 --rc genhtml_function_coverage=1 00:30:50.921 --rc genhtml_legend=1 00:30:50.921 --rc geninfo_all_blocks=1 00:30:50.921 --rc geninfo_unexecuted_blocks=1 00:30:50.921 00:30:50.921 ' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.921 --rc genhtml_branch_coverage=1 00:30:50.921 --rc genhtml_function_coverage=1 00:30:50.921 --rc genhtml_legend=1 00:30:50.921 --rc geninfo_all_blocks=1 00:30:50.921 --rc geninfo_unexecuted_blocks=1 00:30:50.921 00:30:50.921 ' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.921 --rc genhtml_branch_coverage=1 00:30:50.921 --rc genhtml_function_coverage=1 00:30:50.921 --rc genhtml_legend=1 00:30:50.921 --rc geninfo_all_blocks=1 00:30:50.921 --rc geninfo_unexecuted_blocks=1 00:30:50.921 00:30:50.921 ' 00:30:50.921 13:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:50.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.921 --rc genhtml_branch_coverage=1 00:30:50.921 --rc genhtml_function_coverage=1 00:30:50.921 --rc genhtml_legend=1 00:30:50.921 --rc geninfo_all_blocks=1 00:30:50.921 --rc geninfo_unexecuted_blocks=1 00:30:50.921 00:30:50.921 ' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.921 13:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.921 
13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:50.921 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:50.922 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.922 13:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:59.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:30:59.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.062 13:38:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:59.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:59.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.062 13:38:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.062 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:59.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:30:59.063 00:30:59.063 --- 10.0.0.2 ping statistics --- 00:30:59.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.063 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:59.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:30:59.063 00:30:59.063 --- 10.0.0.1 ping statistics --- 00:30:59.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.063 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2362457 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2362457 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2362457 ']' 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.063 13:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.063 [2024-12-06 13:38:44.863871] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.063 [2024-12-06 13:38:44.864997] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:30:59.063 [2024-12-06 13:38:44.865047] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.063 [2024-12-06 13:38:44.962317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:59.063 [2024-12-06 13:38:45.013703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.063 [2024-12-06 13:38:45.013754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.063 [2024-12-06 13:38:45.013763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.063 [2024-12-06 13:38:45.013770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.063 [2024-12-06 13:38:45.013776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.063 [2024-12-06 13:38:45.015517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.063 [2024-12-06 13:38:45.015567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.063 [2024-12-06 13:38:45.093652] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:59.063 [2024-12-06 13:38:45.094414] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:59.063 [2024-12-06 13:38:45.094652] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:59.063 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.063 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:59.063 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.063 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:59.063 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 [2024-12-06 13:38:45.724577] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 [2024-12-06 13:38:45.757227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 NULL1 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 Delay0 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2362619 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:59.324 13:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:59.324 [2024-12-06 13:38:45.886516] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:31:01.237 13:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.237 13:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.237 13:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:01.572 Write completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 starting I/O failed: -6 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 starting I/O failed: -6 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 starting I/O failed: -6 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 starting I/O failed: -6 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Write completed with error (sct=0, sc=8) 00:31:01.572 Write completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 starting I/O failed: -6 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.572 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 
00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 [2024-12-06 13:38:48.108308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c452c0 is same with the state(6) to be set 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error 
(sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 
Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 
starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 starting I/O failed: -6 00:31:01.573 [2024-12-06 13:38:48.109087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3240000c40 is same with the state(6) to be set 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 
Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Write completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.573 Read completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error 
(sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:01.574 Write completed with error (sct=0, sc=8) 00:31:01.574 Read completed with error (sct=0, sc=8) 00:31:02.612 [2024-12-06 13:38:49.069716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c469b0 is same with the state(6) to be set 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 [2024-12-06 13:38:49.108885] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c454a0 is same with the state(6) to be set 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 [2024-12-06 13:38:49.109937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c45860 is same with the state(6) to be set 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Write completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 00:31:02.613 Read completed with error (sct=0, sc=8) 
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 [2024-12-06 13:38:49.110794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f324000d020 is same with the state(6) to be set
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Write completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 Read completed with error (sct=0, sc=8)
00:31:02.613 [2024-12-06 13:38:49.110887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f324000d7c0 is same with the state(6) to be set
00:31:02.613 Initializing NVMe Controllers
00:31:02.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:02.613 Controller IO queue size 128, less than required.
00:31:02.613 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:02.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:02.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:02.613 Initialization complete. Launching workers.
00:31:02.613 ========================================================
00:31:02.613 Latency(us)
00:31:02.613 Device Information : IOPS MiB/s Average min max
00:31:02.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.79 0.08 897488.21 473.41 1999841.86
00:31:02.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.82 0.08 924578.19 367.36 2001736.20
00:31:02.613 ========================================================
00:31:02.613 Total : 336.61 0.16 910833.42 367.36 2001736.20
00:31:02.613
00:31:02.613 [2024-12-06 13:38:49.111427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c469b0 (9): Bad file descriptor
00:31:02.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:02.613 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:02.613 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:02.613 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2362619
00:31:02.613 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2362619
00:31:03.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2362619) - No such process
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2362619
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2362619
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2362619
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
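The `NOT wait 2362619` xtrace above steps through autotest_common.sh's negation helper, which treats a non-zero exit status from the wrapped command as success. A minimal stand-in for that pattern (simplified; not the harness's exact `NOT`/`valid_exec_arg` implementation):

```shell
# NOT: run a command and invert its status, mirroring the
# es=$? / (( !es == 0 )) logic visible in the xtrace above.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command returned non-zero.
    (( es != 0 ))
}

NOT false && echo "ok: command failed, NOT reports success"
NOT true || echo "ok: command succeeded, NOT reports failure"
```

In the log, the waited-on pid no longer exists, so `wait` fails, `es` becomes 1, and the negated check passes.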
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:03.185 [2024-12-06 13:38:49.640913] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2363300
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:31:03.185 13:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:03.185 [2024-12-06 13:38:49.739676] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:31:03.759 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:03.759 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:03.759 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:04.022 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:04.022 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:04.022 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:04.594 13:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:04.594 13:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:04.594 13:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:05.164 13:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:05.164 13:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:05.164 13:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:05.736 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:05.736 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:05.736 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:06.306 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:06.306 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:06.306 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:06.567 Initializing NVMe Controllers
00:31:06.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:06.567 Controller IO queue size 128, less than required.
00:31:06.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:06.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:06.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:06.567 Initialization complete. Launching workers.
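The repeated `(( delay++ > 20 ))` / `kill -0 2363300` / `sleep 0.5` entries traced above are delete_subsystem.sh polling the backgrounded perf process until it exits. A minimal sketch of that polling pattern (with a hypothetical short-lived workload standing in for spdk_nvme_perf):

```shell
# Background a short-lived workload (stand-in for spdk_nvme_perf).
sleep 1 &
perf_pid=$!

delay=0
# kill -0 sends no signal; it only probes whether the pid still exists.
# Loop until the process is gone or ~10s (20 iterations x 0.5s) elapse,
# mirroring the traced (( delay++ > 20 )) / kill -0 / sleep 0.5 cycle.
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "gave up waiting on $perf_pid" >&2
        break
    fi
    sleep 0.5
done

# Reap the child; bash remembers the exit status of its own children
# even after they have terminated.
wait "$perf_pid"
echo "workload exited with status $?"
```

Once `kill -0` starts failing (as it does in the log with `No such process`), the loop ends and `wait` collects the final status.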
00:31:06.567 ========================================================
00:31:06.567 Latency(us)
00:31:06.567 Device Information : IOPS MiB/s Average min max
00:31:06.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002296.59 1000120.80 1041274.01
00:31:06.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004035.58 1000358.79 1010639.81
00:31:06.567 ========================================================
00:31:06.567 Total : 256.00 0.12 1003166.09 1000120.80 1041274.01
00:31:06.567
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2363300
00:31:06.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2363300) - No such process
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2363300
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:06.567 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:06.567 rmmod nvme_tcp
00:31:06.567 rmmod nvme_fabrics
00:31:06.828 rmmod nvme_keyring
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2362457 ']'
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2362457
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2362457 ']'
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2362457
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362457
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362457'
00:31:06.828 killing process with pid 2362457
00:31:06.828 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2362457
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2362457
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:06.829 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:09.367
00:31:09.367 real 0m18.405s
00:31:09.367 user 0m27.080s
00:31:09.367 sys 0m7.293s
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:09.367 ************************************
00:31:09.367 END TEST nvmf_delete_subsystem
00:31:09.367 ************************************
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:09.367 ************************************
00:31:09.367 START TEST nvmf_host_management
00:31:09.367 ************************************
00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:31:09.367 * Looking for test storage...
00:31:09.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.367 13:38:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:09.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.367 --rc genhtml_branch_coverage=1 00:31:09.367 --rc genhtml_function_coverage=1 00:31:09.367 --rc genhtml_legend=1 00:31:09.367 --rc geninfo_all_blocks=1 00:31:09.367 --rc geninfo_unexecuted_blocks=1 00:31:09.367 00:31:09.367 ' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:09.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.367 --rc genhtml_branch_coverage=1 00:31:09.367 --rc genhtml_function_coverage=1 00:31:09.367 --rc genhtml_legend=1 00:31:09.367 --rc geninfo_all_blocks=1 00:31:09.367 --rc geninfo_unexecuted_blocks=1 00:31:09.367 00:31:09.367 ' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:09.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.367 --rc genhtml_branch_coverage=1 00:31:09.367 --rc genhtml_function_coverage=1 00:31:09.367 --rc genhtml_legend=1 00:31:09.367 --rc geninfo_all_blocks=1 00:31:09.367 --rc geninfo_unexecuted_blocks=1 00:31:09.367 00:31:09.367 ' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:09.367 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.367 --rc genhtml_branch_coverage=1 00:31:09.367 --rc genhtml_function_coverage=1 00:31:09.367 --rc genhtml_legend=1 00:31:09.367 --rc geninfo_all_blocks=1 00:31:09.367 --rc geninfo_unexecuted_blocks=1 00:31:09.367 00:31:09.367 ' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.367 13:38:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.367 
13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.367 13:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.505 
13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.505 13:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.505 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:17.506 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.506 13:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:17.506 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.506 13:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:17.506 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:17.506 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:17.506 13:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.506 13:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:31:17.506 00:31:17.506 --- 10.0.0.2 ping statistics --- 00:31:17.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.506 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:31:17.506 00:31:17.506 --- 10.0.0.1 ping statistics --- 00:31:17.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.506 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2368400 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2368400 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2368400 ']' 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.506 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.507 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.507 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.507 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.507 [2024-12-06 13:39:03.369259] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.507 [2024-12-06 13:39:03.370389] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:31:17.507 [2024-12-06 13:39:03.370440] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.507 [2024-12-06 13:39:03.471335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.507 [2024-12-06 13:39:03.524358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.507 [2024-12-06 13:39:03.524414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.507 [2024-12-06 13:39:03.524423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.507 [2024-12-06 13:39:03.524431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.507 [2024-12-06 13:39:03.524437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:17.507 [2024-12-06 13:39:03.526838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.507 [2024-12-06 13:39:03.527006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.507 [2024-12-06 13:39:03.527168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.507 [2024-12-06 13:39:03.527168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:17.507 [2024-12-06 13:39:03.605518] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.507 [2024-12-06 13:39:03.606863] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:17.507 [2024-12-06 13:39:03.606909] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:17.507 [2024-12-06 13:39:03.607384] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.507 [2024-12-06 13:39:03.607424] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.768 [2024-12-06 13:39:04.240029] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.768 13:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.768 Malloc0 00:31:17.768 [2024-12-06 13:39:04.340384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2368489 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2368489 /var/tmp/bdevperf.sock 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2368489 ']' 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:17.768 { 00:31:17.768 "params": { 00:31:17.768 "name": "Nvme$subsystem", 00:31:17.768 "trtype": "$TEST_TRANSPORT", 00:31:17.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:17.768 "adrfam": "ipv4", 00:31:17.768 "trsvcid": "$NVMF_PORT", 00:31:17.768 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:17.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:17.768 "hdgst": ${hdgst:-false}, 00:31:17.768 "ddgst": ${ddgst:-false} 00:31:17.768 }, 00:31:17.768 "method": "bdev_nvme_attach_controller" 00:31:17.768 } 00:31:17.768 EOF 00:31:17.768 )") 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:17.768 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:17.768 "params": { 00:31:17.768 "name": "Nvme0", 00:31:17.768 "trtype": "tcp", 00:31:17.768 "traddr": "10.0.0.2", 00:31:17.768 "adrfam": "ipv4", 00:31:17.768 "trsvcid": "4420", 00:31:17.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.768 "hdgst": false, 00:31:17.768 "ddgst": false 00:31:17.768 }, 00:31:17.768 "method": "bdev_nvme_attach_controller" 00:31:17.768 }' 00:31:18.029 [2024-12-06 13:39:04.448742] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:31:18.029 [2024-12-06 13:39:04.448813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368489 ] 00:31:18.029 [2024-12-06 13:39:04.542780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.030 [2024-12-06 13:39:04.596340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.291 Running I/O for 10 seconds... 
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:31:18.889 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=540
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 540 -ge 100 ']'
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:18.890 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:18.890
[2024-12-06 13:39:05.343744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b3e20 is same with the state(6) to be set
00:31:18.890 [2024-12-06 13:39:05.344399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.890 [2024-12-06 13:39:05.344464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.890 [2024-12-06 13:39:05.344490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.890 [2024-12-06 13:39:05.344499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.890 [2024-12-06 13:39:05.344510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.890 [2024-12-06 13:39:05.344518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.890 [2024-12-06 13:39:05.344530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.890 [2024-12-06 13:39:05.344542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.890 [2024-12-06 13:39:05.344558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.890 [2024-12-06 13:39:05.344567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.891 [2024-12-06 13:39:05.345555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.891 [2024-12-06 13:39:05.345564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.891 [2024-12-06 13:39:05.345573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.891 [2024-12-06 13:39:05.345581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.891 [2024-12-06 13:39:05.345590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.891 [2024-12-06 13:39:05.345597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.891 [2024-12-06 13:39:05.345607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.891 [2024-12-06 13:39:05.345614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.891 [2024-12-06 13:39:05.345623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185faf0 is same with the state(6) to be set 00:31:18.891 [2024-12-06 13:39:05.346944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:18.891 task offset: 73728 on job bdev=Nvme0n1 fails 00:31:18.891 00:31:18.891 Latency(us) 00:31:18.891 [2024-12-06T12:39:05.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.891 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:18.891 Job: Nvme0n1 ended in about 0.42 seconds with error 00:31:18.891 Verification LBA range: start 0x0 length 0x400 00:31:18.891 Nvme0n1 : 0.42 1369.79 85.61 152.20 0.00 40317.74 9284.27 43035.31 00:31:18.891 [2024-12-06T12:39:05.550Z] 
=================================================================================================================== 00:31:18.891 [2024-12-06T12:39:05.550Z] Total : 1369.79 85.61 152.20 0.00 40317.74 9284.27 43035.31 00:31:18.891 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.891 [2024-12-06 13:39:05.349238] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:18.891 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:18.891 [2024-12-06 13:39:05.349281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1646c20 (9): Bad file descriptor 00:31:18.891 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.891 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:18.891 [2024-12-06 13:39:05.350816] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:18.891 [2024-12-06 13:39:05.350914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:18.891 [2024-12-06 13:39:05.350942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.891 [2024-12-06 13:39:05.350956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:18.891 [2024-12-06 13:39:05.350965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:18.891 [2024-12-06 
13:39:05.350973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.891 [2024-12-06 13:39:05.350981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1646c20 00:31:18.891 [2024-12-06 13:39:05.351005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1646c20 (9): Bad file descriptor 00:31:18.891 [2024-12-06 13:39:05.351043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:18.891 [2024-12-06 13:39:05.351053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:18.891 [2024-12-06 13:39:05.351064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:18.891 [2024-12-06 13:39:05.351075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:31:18.891 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.891 13:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2368489 00:31:19.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2368489) - No such process 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:19.831 { 00:31:19.831 "params": { 00:31:19.831 "name": "Nvme$subsystem", 00:31:19.831 "trtype": "$TEST_TRANSPORT", 00:31:19.831 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:19.831 "adrfam": "ipv4", 00:31:19.831 "trsvcid": "$NVMF_PORT", 00:31:19.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.831 "hdgst": ${hdgst:-false}, 00:31:19.831 "ddgst": ${ddgst:-false} 00:31:19.831 }, 00:31:19.831 "method": "bdev_nvme_attach_controller" 00:31:19.831 } 00:31:19.831 EOF 00:31:19.831 )") 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:19.831 13:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:19.831 "params": { 00:31:19.831 "name": "Nvme0", 00:31:19.831 "trtype": "tcp", 00:31:19.831 "traddr": "10.0.0.2", 00:31:19.831 "adrfam": "ipv4", 00:31:19.831 "trsvcid": "4420", 00:31:19.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.831 "hdgst": false, 00:31:19.831 "ddgst": false 00:31:19.831 }, 00:31:19.831 "method": "bdev_nvme_attach_controller" 00:31:19.831 }' 00:31:19.831 [2024-12-06 13:39:06.422550] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:31:19.831 [2024-12-06 13:39:06.422627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368910 ] 00:31:20.091 [2024-12-06 13:39:06.515468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.091 [2024-12-06 13:39:06.569383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.091 Running I/O for 1 seconds... 
00:31:21.487 2021.00 IOPS, 126.31 MiB/s 00:31:21.487 Latency(us) 00:31:21.487 [2024-12-06T12:39:08.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.487 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:21.487 Verification LBA range: start 0x0 length 0x400 00:31:21.487 Nvme0n1 : 1.02 2051.70 128.23 0.00 0.00 30527.80 1884.16 32112.64 00:31:21.487 [2024-12-06T12:39:08.146Z] =================================================================================================================== 00:31:21.487 [2024-12-06T12:39:08.146Z] Total : 2051.70 128.23 0.00 0.00 30527.80 1884.16 32112.64 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:21.487 
13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.487 rmmod nvme_tcp 00:31:21.487 rmmod nvme_fabrics 00:31:21.487 rmmod nvme_keyring 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2368400 ']' 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2368400 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2368400 ']' 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2368400 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:21.487 13:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2368400 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:21.487 13:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2368400' 00:31:21.487 killing process with pid 2368400 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2368400 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2368400 00:31:21.487 [2024-12-06 13:39:08.113091] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:21.487 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:21.748 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:21.748 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:21.748 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.748 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.748 13:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:23.663 00:31:23.663 real 0m14.625s 00:31:23.663 user 0m19.229s 00:31:23.663 sys 0m7.401s 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:23.663 ************************************ 00:31:23.663 END TEST nvmf_host_management 00:31:23.663 ************************************ 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:23.663 ************************************ 00:31:23.663 START TEST nvmf_lvol 00:31:23.663 ************************************ 00:31:23.663 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:23.925 * Looking for test storage... 
00:31:23.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.925 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.926 --rc genhtml_branch_coverage=1 00:31:23.926 --rc genhtml_function_coverage=1 00:31:23.926 --rc genhtml_legend=1 00:31:23.926 --rc geninfo_all_blocks=1 00:31:23.926 --rc geninfo_unexecuted_blocks=1 00:31:23.926 00:31:23.926 ' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.926 --rc genhtml_branch_coverage=1 00:31:23.926 --rc genhtml_function_coverage=1 00:31:23.926 --rc genhtml_legend=1 00:31:23.926 --rc geninfo_all_blocks=1 00:31:23.926 --rc geninfo_unexecuted_blocks=1 00:31:23.926 00:31:23.926 ' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.926 --rc genhtml_branch_coverage=1 00:31:23.926 --rc genhtml_function_coverage=1 00:31:23.926 --rc genhtml_legend=1 00:31:23.926 --rc geninfo_all_blocks=1 00:31:23.926 --rc geninfo_unexecuted_blocks=1 00:31:23.926 00:31:23.926 ' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.926 --rc genhtml_branch_coverage=1 00:31:23.926 --rc genhtml_function_coverage=1 00:31:23.926 --rc genhtml_legend=1 00:31:23.926 --rc geninfo_all_blocks=1 00:31:23.926 --rc geninfo_unexecuted_blocks=1 00:31:23.926 00:31:23.926 ' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.926 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:23.927 
13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:23.927 13:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.065 13:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.065 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:32.066 13:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:32.066 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:32.066 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.066 13:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:32.066 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.066 13:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:32.066 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.066 13:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:31:32.066 00:31:32.066 --- 10.0.0.2 ping statistics --- 00:31:32.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.066 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:31:32.066 00:31:32.066 --- 10.0.0.1 ping statistics --- 00:31:32.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.066 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2373916 
00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2373916 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2373916 ']' 00:31:32.066 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:32.067 [2024-12-06 13:39:18.119918] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:32.067 [2024-12-06 13:39:18.121051] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:31:32.067 [2024-12-06 13:39:18.121104] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.067 [2024-12-06 13:39:18.197339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:32.067 [2024-12-06 13:39:18.243878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.067 [2024-12-06 13:39:18.243931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.067 [2024-12-06 13:39:18.243939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.067 [2024-12-06 13:39:18.243944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.067 [2024-12-06 13:39:18.243949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.067 [2024-12-06 13:39:18.245572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.067 [2024-12-06 13:39:18.245761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.067 [2024-12-06 13:39:18.245881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.067 [2024-12-06 13:39:18.319164] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:32.067 [2024-12-06 13:39:18.320046] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:32.067 [2024-12-06 13:39:18.320819] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:32.067 [2024-12-06 13:39:18.320948] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:32.067 [2024-12-06 13:39:18.562847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.067 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.327 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:32.327 13:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:32.588 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:32.588 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:32.588 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:32.848 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=918db01d-7793-445a-89a5-38435e6cf615 00:31:32.848 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 918db01d-7793-445a-89a5-38435e6cf615 lvol 20 00:31:33.108 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=56b93298-fb82-44fc-b8f9-4f6c872bbcb7 00:31:33.108 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:33.369 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56b93298-fb82-44fc-b8f9-4f6c872bbcb7 00:31:33.369 13:39:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:33.631 [2024-12-06 13:39:20.134814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.631 13:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.892 
13:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2374287 00:31:33.892 13:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:33.892 13:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:34.836 13:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 56b93298-fb82-44fc-b8f9-4f6c872bbcb7 MY_SNAPSHOT 00:31:35.097 13:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bb547647-a804-4b08-b117-d0a4b186ab42 00:31:35.097 13:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 56b93298-fb82-44fc-b8f9-4f6c872bbcb7 30 00:31:35.358 13:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bb547647-a804-4b08-b117-d0a4b186ab42 MY_CLONE 00:31:35.618 13:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4038670d-2bcc-471b-9be0-f75316980020 00:31:35.618 13:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4038670d-2bcc-471b-9be0-f75316980020 00:31:36.189 13:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2374287 00:31:44.320 Initializing NVMe Controllers 00:31:44.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:44.320 
Controller IO queue size 128, less than required. 00:31:44.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:44.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:44.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:44.320 Initialization complete. Launching workers. 00:31:44.320 ======================================================== 00:31:44.320 Latency(us) 00:31:44.320 Device Information : IOPS MiB/s Average min max 00:31:44.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15362.00 60.01 8333.69 587.52 81747.58 00:31:44.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14868.30 58.08 8608.25 2668.50 82986.70 00:31:44.320 ======================================================== 00:31:44.320 Total : 30230.30 118.09 8468.73 587.52 82986.70 00:31:44.320 00:31:44.320 13:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.581 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56b93298-fb82-44fc-b8f9-4f6c872bbcb7 00:31:44.581 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 918db01d-7793-445a-89a5-38435e6cf615 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:44.841 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:44.842 rmmod nvme_tcp 00:31:44.842 rmmod nvme_fabrics 00:31:44.842 rmmod nvme_keyring 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2373916 ']' 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2373916 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2373916 ']' 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2373916 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.842 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2373916 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2373916' 00:31:45.102 killing process with pid 2373916 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2373916 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2373916 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.102 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.103 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.103 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.103 13:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.103 13:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.650 00:31:47.650 real 0m23.403s 00:31:47.650 user 0m56.252s 00:31:47.650 sys 0m10.702s 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:47.650 ************************************ 00:31:47.650 END TEST nvmf_lvol 00:31:47.650 ************************************ 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.650 ************************************ 00:31:47.650 START TEST nvmf_lvs_grow 00:31:47.650 ************************************ 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:47.650 * Looking for test storage... 
00:31:47.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.650 13:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.650 13:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.650 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:47.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.650 --rc genhtml_branch_coverage=1 00:31:47.650 --rc genhtml_function_coverage=1 00:31:47.650 --rc genhtml_legend=1 00:31:47.651 --rc geninfo_all_blocks=1 00:31:47.651 --rc geninfo_unexecuted_blocks=1 00:31:47.651 00:31:47.651 ' 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.651 --rc genhtml_branch_coverage=1 00:31:47.651 --rc genhtml_function_coverage=1 00:31:47.651 --rc genhtml_legend=1 00:31:47.651 --rc geninfo_all_blocks=1 00:31:47.651 --rc geninfo_unexecuted_blocks=1 00:31:47.651 00:31:47.651 ' 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.651 --rc genhtml_branch_coverage=1 00:31:47.651 --rc genhtml_function_coverage=1 00:31:47.651 --rc genhtml_legend=1 00:31:47.651 --rc geninfo_all_blocks=1 00:31:47.651 --rc geninfo_unexecuted_blocks=1 00:31:47.651 00:31:47.651 ' 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.651 --rc genhtml_branch_coverage=1 00:31:47.651 --rc genhtml_function_coverage=1 00:31:47.651 --rc genhtml_legend=1 00:31:47.651 --rc geninfo_all_blocks=1 00:31:47.651 --rc 
geninfo_unexecuted_blocks=1 00:31:47.651 00:31:47.651 ' 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.651 13:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.651 13:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.651 13:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.651 13:39:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.651 13:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:55.800 
13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.800 13:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.800 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.801 13:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:55.801 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:55.801 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:55.801 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.801 13:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:55.801 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.801 
13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:31:55.801 00:31:55.801 --- 10.0.0.2 ping statistics --- 00:31:55.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.801 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:31:55.801 00:31:55.801 --- 10.0.0.1 ping statistics --- 00:31:55.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.801 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.801 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:55.802 13:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2380622 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2380622 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2380622 ']' 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.802 13:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:55.802 [2024-12-06 13:39:41.536002] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.802 [2024-12-06 13:39:41.537122] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:31:55.802 [2024-12-06 13:39:41.537170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.802 [2024-12-06 13:39:41.635791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.802 [2024-12-06 13:39:41.686867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.802 [2024-12-06 13:39:41.686919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.802 [2024-12-06 13:39:41.686927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.802 [2024-12-06 13:39:41.686934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.802 [2024-12-06 13:39:41.686940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.802 [2024-12-06 13:39:41.687786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.802 [2024-12-06 13:39:41.765394] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.802 [2024-12-06 13:39:41.765674] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.802 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:56.064 [2024-12-06 13:39:42.556701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:56.064 ************************************ 00:31:56.064 START TEST lvs_grow_clean 00:31:56.064 ************************************ 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:31:56.064 13:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:56.064 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:56.325 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:56.325 13:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:56.586 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b37b8017-48d9-439c-8a0e-756babb2d375 00:31:56.586 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:31:56.586 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:56.586 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:56.586 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:56.586 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b37b8017-48d9-439c-8a0e-756babb2d375 lvol 150 00:31:56.847 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6414e842-098f-4deb-b4bb-24a2640b7dae 00:31:56.847 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:56.848 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:57.109 [2024-12-06 13:39:43.544332] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:57.109 [2024-12-06 13:39:43.544533] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:57.109 true 00:31:57.109 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:57.109 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:31:57.109 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:57.109 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:57.370 13:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6414e842-098f-4deb-b4bb-24a2640b7dae 00:31:57.631 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.631 [2024-12-06 13:39:44.253020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.631 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2381124 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2381124 /var/tmp/bdevperf.sock 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2381124 ']' 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:57.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:57.893 13:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:57.893 [2024-12-06 13:39:44.507929] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:31:57.893 [2024-12-06 13:39:44.507999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381124 ] 00:31:58.155 [2024-12-06 13:39:44.598594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.155 [2024-12-06 13:39:44.651556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.727 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:58.727 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:58.727 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:59.298 Nvme0n1 00:31:59.298 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:59.298 [ 00:31:59.298 { 00:31:59.298 "name": "Nvme0n1", 00:31:59.298 "aliases": [ 00:31:59.298 "6414e842-098f-4deb-b4bb-24a2640b7dae" 00:31:59.298 ], 00:31:59.298 "product_name": "NVMe disk", 00:31:59.298 
"block_size": 4096, 00:31:59.298 "num_blocks": 38912, 00:31:59.298 "uuid": "6414e842-098f-4deb-b4bb-24a2640b7dae", 00:31:59.298 "numa_id": 0, 00:31:59.298 "assigned_rate_limits": { 00:31:59.298 "rw_ios_per_sec": 0, 00:31:59.298 "rw_mbytes_per_sec": 0, 00:31:59.298 "r_mbytes_per_sec": 0, 00:31:59.298 "w_mbytes_per_sec": 0 00:31:59.298 }, 00:31:59.298 "claimed": false, 00:31:59.298 "zoned": false, 00:31:59.298 "supported_io_types": { 00:31:59.298 "read": true, 00:31:59.298 "write": true, 00:31:59.298 "unmap": true, 00:31:59.298 "flush": true, 00:31:59.298 "reset": true, 00:31:59.298 "nvme_admin": true, 00:31:59.298 "nvme_io": true, 00:31:59.298 "nvme_io_md": false, 00:31:59.298 "write_zeroes": true, 00:31:59.298 "zcopy": false, 00:31:59.298 "get_zone_info": false, 00:31:59.298 "zone_management": false, 00:31:59.298 "zone_append": false, 00:31:59.298 "compare": true, 00:31:59.298 "compare_and_write": true, 00:31:59.298 "abort": true, 00:31:59.298 "seek_hole": false, 00:31:59.298 "seek_data": false, 00:31:59.298 "copy": true, 00:31:59.298 "nvme_iov_md": false 00:31:59.298 }, 00:31:59.298 "memory_domains": [ 00:31:59.298 { 00:31:59.298 "dma_device_id": "system", 00:31:59.298 "dma_device_type": 1 00:31:59.298 } 00:31:59.298 ], 00:31:59.298 "driver_specific": { 00:31:59.298 "nvme": [ 00:31:59.298 { 00:31:59.298 "trid": { 00:31:59.298 "trtype": "TCP", 00:31:59.298 "adrfam": "IPv4", 00:31:59.298 "traddr": "10.0.0.2", 00:31:59.298 "trsvcid": "4420", 00:31:59.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:59.298 }, 00:31:59.298 "ctrlr_data": { 00:31:59.298 "cntlid": 1, 00:31:59.298 "vendor_id": "0x8086", 00:31:59.298 "model_number": "SPDK bdev Controller", 00:31:59.298 "serial_number": "SPDK0", 00:31:59.298 "firmware_revision": "25.01", 00:31:59.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.298 "oacs": { 00:31:59.298 "security": 0, 00:31:59.298 "format": 0, 00:31:59.298 "firmware": 0, 00:31:59.298 "ns_manage": 0 00:31:59.298 }, 00:31:59.298 "multi_ctrlr": true, 
00:31:59.298 "ana_reporting": false 00:31:59.298 }, 00:31:59.298 "vs": { 00:31:59.298 "nvme_version": "1.3" 00:31:59.298 }, 00:31:59.298 "ns_data": { 00:31:59.298 "id": 1, 00:31:59.298 "can_share": true 00:31:59.298 } 00:31:59.298 } 00:31:59.298 ], 00:31:59.298 "mp_policy": "active_passive" 00:31:59.298 } 00:31:59.298 } 00:31:59.298 ] 00:31:59.298 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2381346 00:31:59.298 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:59.298 13:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:59.298 Running I/O for 10 seconds... 00:32:00.694 Latency(us) 00:32:00.694 [2024-12-06T12:39:47.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:00.694 Nvme0n1 : 1.00 16555.00 64.67 0.00 0.00 0.00 0.00 0.00 00:32:00.694 [2024-12-06T12:39:47.353Z] =================================================================================================================== 00:32:00.694 [2024-12-06T12:39:47.353Z] Total : 16555.00 64.67 0.00 0.00 0.00 0.00 0.00 00:32:00.694 00:32:01.264 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:01.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:01.524 Nvme0n1 : 2.00 16789.50 65.58 0.00 0.00 0.00 0.00 0.00 00:32:01.524 [2024-12-06T12:39:48.183Z] 
=================================================================================================================== 00:32:01.524 [2024-12-06T12:39:48.183Z] Total : 16789.50 65.58 0.00 0.00 0.00 0.00 0.00 00:32:01.524 00:32:01.524 true 00:32:01.524 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:01.524 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:01.807 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:01.807 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:01.807 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2381346 00:32:02.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.537 Nvme0n1 : 3.00 17033.00 66.54 0.00 0.00 0.00 0.00 0.00 00:32:02.537 [2024-12-06T12:39:49.196Z] =================================================================================================================== 00:32:02.537 [2024-12-06T12:39:49.196Z] Total : 17033.00 66.54 0.00 0.00 0.00 0.00 0.00 00:32:02.537 00:32:03.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.481 Nvme0n1 : 4.00 17206.75 67.21 0.00 0.00 0.00 0.00 0.00 00:32:03.481 [2024-12-06T12:39:50.140Z] =================================================================================================================== 00:32:03.481 [2024-12-06T12:39:50.140Z] Total : 17206.75 67.21 0.00 0.00 0.00 0.00 0.00 00:32:03.481 00:32:04.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:04.424 Nvme0n1 : 5.00 18261.40 71.33 0.00 0.00 0.00 0.00 0.00 00:32:04.424 [2024-12-06T12:39:51.083Z] =================================================================================================================== 00:32:04.424 [2024-12-06T12:39:51.083Z] Total : 18261.40 71.33 0.00 0.00 0.00 0.00 0.00 00:32:04.424 00:32:05.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.362 Nvme0n1 : 6.00 19383.17 75.72 0.00 0.00 0.00 0.00 0.00 00:32:05.362 [2024-12-06T12:39:52.021Z] =================================================================================================================== 00:32:05.362 [2024-12-06T12:39:52.021Z] Total : 19383.17 75.72 0.00 0.00 0.00 0.00 0.00 00:32:05.362 00:32:06.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.742 Nvme0n1 : 7.00 20193.57 78.88 0.00 0.00 0.00 0.00 0.00 00:32:06.742 [2024-12-06T12:39:53.401Z] =================================================================================================================== 00:32:06.742 [2024-12-06T12:39:53.401Z] Total : 20193.57 78.88 0.00 0.00 0.00 0.00 0.00 00:32:06.742 00:32:07.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.681 Nvme0n1 : 8.00 20799.38 81.25 0.00 0.00 0.00 0.00 0.00 00:32:07.681 [2024-12-06T12:39:54.340Z] =================================================================================================================== 00:32:07.681 [2024-12-06T12:39:54.340Z] Total : 20799.38 81.25 0.00 0.00 0.00 0.00 0.00 00:32:07.681 00:32:08.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.621 Nvme0n1 : 9.00 21275.89 83.11 0.00 0.00 0.00 0.00 0.00 00:32:08.621 [2024-12-06T12:39:55.281Z] =================================================================================================================== 00:32:08.622 [2024-12-06T12:39:55.281Z] Total : 21275.89 83.11 0.00 0.00 0.00 0.00 0.00 00:32:08.622 
00:32:09.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.562 Nvme0n1 : 10.00 21657.10 84.60 0.00 0.00 0.00 0.00 0.00 00:32:09.562 [2024-12-06T12:39:56.221Z] =================================================================================================================== 00:32:09.562 [2024-12-06T12:39:56.221Z] Total : 21657.10 84.60 0.00 0.00 0.00 0.00 0.00 00:32:09.562 00:32:09.562 00:32:09.562 Latency(us) 00:32:09.562 [2024-12-06T12:39:56.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.563 Nvme0n1 : 10.00 21659.78 84.61 0.00 0.00 5905.71 4041.39 22937.60 00:32:09.563 [2024-12-06T12:39:56.222Z] =================================================================================================================== 00:32:09.563 [2024-12-06T12:39:56.222Z] Total : 21659.78 84.61 0.00 0.00 5905.71 4041.39 22937.60 00:32:09.563 { 00:32:09.563 "results": [ 00:32:09.563 { 00:32:09.563 "job": "Nvme0n1", 00:32:09.563 "core_mask": "0x2", 00:32:09.563 "workload": "randwrite", 00:32:09.563 "status": "finished", 00:32:09.563 "queue_depth": 128, 00:32:09.563 "io_size": 4096, 00:32:09.563 "runtime": 10.004672, 00:32:09.563 "iops": 21659.780550526793, 00:32:09.563 "mibps": 84.60851777549529, 00:32:09.563 "io_failed": 0, 00:32:09.563 "io_timeout": 0, 00:32:09.563 "avg_latency_us": 5905.714210648564, 00:32:09.563 "min_latency_us": 4041.3866666666668, 00:32:09.563 "max_latency_us": 22937.6 00:32:09.563 } 00:32:09.563 ], 00:32:09.563 "core_count": 1 00:32:09.563 } 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2381124 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2381124 ']' 00:32:09.563 13:39:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2381124 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381124 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381124' 00:32:09.563 killing process with pid 2381124 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2381124 00:32:09.563 Received shutdown signal, test time was about 10.000000 seconds 00:32:09.563 00:32:09.563 Latency(us) 00:32:09.563 [2024-12-06T12:39:56.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.563 [2024-12-06T12:39:56.222Z] =================================================================================================================== 00:32:09.563 [2024-12-06T12:39:56.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2381124 00:32:09.563 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.822 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:10.083 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:10.083 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:10.083 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:10.083 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:10.083 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:10.344 [2024-12-06 13:39:56.840371] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.344 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.345 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.345 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.345 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:10.345 13:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:10.606 request: 00:32:10.606 { 00:32:10.606 "uuid": "b37b8017-48d9-439c-8a0e-756babb2d375", 00:32:10.606 "method": 
"bdev_lvol_get_lvstores", 00:32:10.606 "req_id": 1 00:32:10.606 } 00:32:10.606 Got JSON-RPC error response 00:32:10.606 response: 00:32:10.606 { 00:32:10.606 "code": -19, 00:32:10.606 "message": "No such device" 00:32:10.606 } 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:10.606 aio_bdev 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6414e842-098f-4deb-b4bb-24a2640b7dae 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6414e842-098f-4deb-b4bb-24a2640b7dae 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:10.606 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:10.867 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6414e842-098f-4deb-b4bb-24a2640b7dae -t 2000 00:32:11.128 [ 00:32:11.128 { 00:32:11.128 "name": "6414e842-098f-4deb-b4bb-24a2640b7dae", 00:32:11.128 "aliases": [ 00:32:11.128 "lvs/lvol" 00:32:11.128 ], 00:32:11.128 "product_name": "Logical Volume", 00:32:11.128 "block_size": 4096, 00:32:11.128 "num_blocks": 38912, 00:32:11.128 "uuid": "6414e842-098f-4deb-b4bb-24a2640b7dae", 00:32:11.128 "assigned_rate_limits": { 00:32:11.128 "rw_ios_per_sec": 0, 00:32:11.128 "rw_mbytes_per_sec": 0, 00:32:11.128 "r_mbytes_per_sec": 0, 00:32:11.128 "w_mbytes_per_sec": 0 00:32:11.128 }, 00:32:11.128 "claimed": false, 00:32:11.128 "zoned": false, 00:32:11.128 "supported_io_types": { 00:32:11.128 "read": true, 00:32:11.128 "write": true, 00:32:11.129 "unmap": true, 00:32:11.129 "flush": false, 00:32:11.129 "reset": true, 00:32:11.129 "nvme_admin": false, 00:32:11.129 "nvme_io": false, 00:32:11.129 "nvme_io_md": false, 00:32:11.129 "write_zeroes": true, 00:32:11.129 "zcopy": false, 00:32:11.129 "get_zone_info": false, 00:32:11.129 "zone_management": false, 00:32:11.129 "zone_append": false, 00:32:11.129 "compare": false, 00:32:11.129 "compare_and_write": false, 00:32:11.129 "abort": false, 00:32:11.129 "seek_hole": true, 00:32:11.129 "seek_data": true, 00:32:11.129 "copy": false, 00:32:11.129 "nvme_iov_md": false 00:32:11.129 }, 00:32:11.129 "driver_specific": { 00:32:11.129 "lvol": { 00:32:11.129 "lvol_store_uuid": "b37b8017-48d9-439c-8a0e-756babb2d375", 00:32:11.129 "base_bdev": "aio_bdev", 00:32:11.129 
"thin_provision": false, 00:32:11.129 "num_allocated_clusters": 38, 00:32:11.129 "snapshot": false, 00:32:11.129 "clone": false, 00:32:11.129 "esnap_clone": false 00:32:11.129 } 00:32:11.129 } 00:32:11.129 } 00:32:11.129 ] 00:32:11.129 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:11.129 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:11.129 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:11.129 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:11.129 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b37b8017-48d9-439c-8a0e-756babb2d375 00:32:11.129 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:11.389 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:11.389 13:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6414e842-098f-4deb-b4bb-24a2640b7dae 00:32:11.650 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b37b8017-48d9-439c-8a0e-756babb2d375 
00:32:11.650 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:11.911 00:32:11.911 real 0m15.884s 00:32:11.911 user 0m15.380s 00:32:11.911 sys 0m1.571s 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:11.911 ************************************ 00:32:11.911 END TEST lvs_grow_clean 00:32:11.911 ************************************ 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.911 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:12.171 ************************************ 00:32:12.171 START TEST lvs_grow_dirty 00:32:12.171 ************************************ 00:32:12.171 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:12.171 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:12.171 13:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:12.171 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:12.172 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:12.172 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:12.172 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:12.172 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:12.172 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:12.172 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:12.432 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:12.432 13:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:12.432 13:39:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:12.432 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:12.432 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:12.692 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:12.692 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:12.692 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c lvol 150 00:32:12.953 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:12.953 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:12.953 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:12.953 [2024-12-06 13:39:59.512315] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:12.953 [2024-12-06 
13:39:59.512497] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:12.953 true 00:32:12.953 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:12.953 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:13.215 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:13.215 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:13.215 13:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:13.475 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.736 [2024-12-06 13:40:00.168857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2384129 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2384129 /var/tmp/bdevperf.sock 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2384129 ']' 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:13.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.736 13:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:13.996 [2024-12-06 13:40:00.433783] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:32:13.996 [2024-12-06 13:40:00.433836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384129 ] 00:32:13.996 [2024-12-06 13:40:00.515772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.997 [2024-12-06 13:40:00.545813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.566 13:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.566 13:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:14.566 13:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:14.826 Nvme0n1 00:32:14.826 13:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:15.086 [ 00:32:15.086 { 00:32:15.086 "name": "Nvme0n1", 00:32:15.086 "aliases": [ 00:32:15.086 "a96c6ea4-72ea-496b-8a55-7ee5194318fc" 00:32:15.086 ], 00:32:15.086 "product_name": "NVMe disk", 00:32:15.086 "block_size": 4096, 00:32:15.086 "num_blocks": 38912, 00:32:15.086 "uuid": "a96c6ea4-72ea-496b-8a55-7ee5194318fc", 00:32:15.086 "numa_id": 0, 00:32:15.086 "assigned_rate_limits": { 00:32:15.086 "rw_ios_per_sec": 0, 00:32:15.086 "rw_mbytes_per_sec": 0, 00:32:15.086 "r_mbytes_per_sec": 0, 00:32:15.086 "w_mbytes_per_sec": 0 00:32:15.086 }, 00:32:15.086 "claimed": false, 00:32:15.086 "zoned": false, 
00:32:15.086 "supported_io_types": { 00:32:15.086 "read": true, 00:32:15.086 "write": true, 00:32:15.086 "unmap": true, 00:32:15.086 "flush": true, 00:32:15.086 "reset": true, 00:32:15.086 "nvme_admin": true, 00:32:15.086 "nvme_io": true, 00:32:15.086 "nvme_io_md": false, 00:32:15.086 "write_zeroes": true, 00:32:15.086 "zcopy": false, 00:32:15.086 "get_zone_info": false, 00:32:15.086 "zone_management": false, 00:32:15.086 "zone_append": false, 00:32:15.086 "compare": true, 00:32:15.086 "compare_and_write": true, 00:32:15.086 "abort": true, 00:32:15.086 "seek_hole": false, 00:32:15.086 "seek_data": false, 00:32:15.086 "copy": true, 00:32:15.086 "nvme_iov_md": false 00:32:15.086 }, 00:32:15.086 "memory_domains": [ 00:32:15.086 { 00:32:15.086 "dma_device_id": "system", 00:32:15.086 "dma_device_type": 1 00:32:15.086 } 00:32:15.086 ], 00:32:15.086 "driver_specific": { 00:32:15.086 "nvme": [ 00:32:15.086 { 00:32:15.086 "trid": { 00:32:15.086 "trtype": "TCP", 00:32:15.086 "adrfam": "IPv4", 00:32:15.086 "traddr": "10.0.0.2", 00:32:15.086 "trsvcid": "4420", 00:32:15.086 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:15.086 }, 00:32:15.086 "ctrlr_data": { 00:32:15.086 "cntlid": 1, 00:32:15.086 "vendor_id": "0x8086", 00:32:15.086 "model_number": "SPDK bdev Controller", 00:32:15.086 "serial_number": "SPDK0", 00:32:15.086 "firmware_revision": "25.01", 00:32:15.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.086 "oacs": { 00:32:15.086 "security": 0, 00:32:15.086 "format": 0, 00:32:15.086 "firmware": 0, 00:32:15.086 "ns_manage": 0 00:32:15.086 }, 00:32:15.086 "multi_ctrlr": true, 00:32:15.086 "ana_reporting": false 00:32:15.086 }, 00:32:15.086 "vs": { 00:32:15.086 "nvme_version": "1.3" 00:32:15.086 }, 00:32:15.086 "ns_data": { 00:32:15.086 "id": 1, 00:32:15.086 "can_share": true 00:32:15.086 } 00:32:15.086 } 00:32:15.086 ], 00:32:15.086 "mp_policy": "active_passive" 00:32:15.086 } 00:32:15.086 } 00:32:15.086 ] 00:32:15.086 13:40:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2384420 00:32:15.086 13:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:15.086 13:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:15.086 Running I/O for 10 seconds... 00:32:16.466 Latency(us) 00:32:16.466 [2024-12-06T12:40:03.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:16.466 Nvme0n1 : 1.00 24511.00 95.75 0.00 0.00 0.00 0.00 0.00 00:32:16.466 [2024-12-06T12:40:03.125Z] =================================================================================================================== 00:32:16.466 [2024-12-06T12:40:03.125Z] Total : 24511.00 95.75 0.00 0.00 0.00 0.00 0.00 00:32:16.466 00:32:17.037 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:17.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:17.297 Nvme0n1 : 2.00 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:32:17.297 [2024-12-06T12:40:03.956Z] =================================================================================================================== 00:32:17.297 [2024-12-06T12:40:03.956Z] Total : 25019.00 97.73 0.00 0.00 0.00 0.00 0.00 00:32:17.297 00:32:17.297 true 00:32:17.298 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:17.298 13:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:17.559 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:17.559 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:17.559 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2384420 00:32:18.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:18.131 Nvme0n1 : 3.00 25188.33 98.39 0.00 0.00 0.00 0.00 0.00 00:32:18.131 [2024-12-06T12:40:04.790Z] =================================================================================================================== 00:32:18.131 [2024-12-06T12:40:04.790Z] Total : 25188.33 98.39 0.00 0.00 0.00 0.00 0.00 00:32:18.131 00:32:19.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.070 Nvme0n1 : 4.00 25273.00 98.72 0.00 0.00 0.00 0.00 0.00 00:32:19.070 [2024-12-06T12:40:05.729Z] =================================================================================================================== 00:32:19.070 [2024-12-06T12:40:05.729Z] Total : 25273.00 98.72 0.00 0.00 0.00 0.00 0.00 00:32:19.070 00:32:20.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.454 Nvme0n1 : 5.00 25349.20 99.02 0.00 0.00 0.00 0.00 0.00 00:32:20.454 [2024-12-06T12:40:07.113Z] =================================================================================================================== 00:32:20.454 [2024-12-06T12:40:07.113Z] Total : 25349.20 99.02 0.00 0.00 0.00 0.00 0.00 00:32:20.454 00:32:21.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:32:21.396 Nvme0n1 : 6.00 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:21.396 [2024-12-06T12:40:08.056Z] =================================================================================================================== 00:32:21.397 [2024-12-06T12:40:08.056Z] Total : 25400.00 99.22 0.00 0.00 0.00 0.00 0.00 00:32:21.397 00:32:22.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.342 Nvme0n1 : 7.00 25436.29 99.36 0.00 0.00 0.00 0.00 0.00 00:32:22.342 [2024-12-06T12:40:09.001Z] =================================================================================================================== 00:32:22.342 [2024-12-06T12:40:09.001Z] Total : 25436.29 99.36 0.00 0.00 0.00 0.00 0.00 00:32:22.342 00:32:23.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.285 Nvme0n1 : 8.00 25463.50 99.47 0.00 0.00 0.00 0.00 0.00 00:32:23.285 [2024-12-06T12:40:09.944Z] =================================================================================================================== 00:32:23.285 [2024-12-06T12:40:09.944Z] Total : 25463.50 99.47 0.00 0.00 0.00 0.00 0.00 00:32:23.285 00:32:24.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.228 Nvme0n1 : 9.00 25481.56 99.54 0.00 0.00 0.00 0.00 0.00 00:32:24.228 [2024-12-06T12:40:10.887Z] =================================================================================================================== 00:32:24.228 [2024-12-06T12:40:10.887Z] Total : 25481.56 99.54 0.00 0.00 0.00 0.00 0.00 00:32:24.228 00:32:25.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.173 Nvme0n1 : 10.00 25498.80 99.60 0.00 0.00 0.00 0.00 0.00 00:32:25.173 [2024-12-06T12:40:11.832Z] =================================================================================================================== 00:32:25.173 [2024-12-06T12:40:11.832Z] Total : 25498.80 99.60 0.00 0.00 0.00 0.00 0.00 00:32:25.173 
00:32:25.173 00:32:25.173 Latency(us) 00:32:25.173 [2024-12-06T12:40:11.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.173 Nvme0n1 : 10.00 25503.85 99.62 0.00 0.00 5016.01 2034.35 31020.37 00:32:25.173 [2024-12-06T12:40:11.832Z] =================================================================================================================== 00:32:25.173 [2024-12-06T12:40:11.832Z] Total : 25503.85 99.62 0.00 0.00 5016.01 2034.35 31020.37 00:32:25.173 { 00:32:25.173 "results": [ 00:32:25.173 { 00:32:25.173 "job": "Nvme0n1", 00:32:25.173 "core_mask": "0x2", 00:32:25.173 "workload": "randwrite", 00:32:25.173 "status": "finished", 00:32:25.173 "queue_depth": 128, 00:32:25.173 "io_size": 4096, 00:32:25.173 "runtime": 10.00304, 00:32:25.173 "iops": 25503.846830563507, 00:32:25.173 "mibps": 99.6244016818887, 00:32:25.173 "io_failed": 0, 00:32:25.173 "io_timeout": 0, 00:32:25.173 "avg_latency_us": 5016.006042532286, 00:32:25.173 "min_latency_us": 2034.3466666666666, 00:32:25.173 "max_latency_us": 31020.373333333333 00:32:25.173 } 00:32:25.173 ], 00:32:25.173 "core_count": 1 00:32:25.173 } 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2384129 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2384129 ']' 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2384129 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.173 13:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2384129 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2384129' 00:32:25.173 killing process with pid 2384129 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2384129 00:32:25.173 Received shutdown signal, test time was about 10.000000 seconds 00:32:25.173 00:32:25.173 Latency(us) 00:32:25.173 [2024-12-06T12:40:11.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.173 [2024-12-06T12:40:11.832Z] =================================================================================================================== 00:32:25.173 [2024-12-06T12:40:11.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:25.173 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2384129 00:32:25.435 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:25.435 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.696 13:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:25.696 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2380622 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2380622 00:32:25.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2380622 Killed "${NVMF_APP[@]}" "$@" 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2386440 00:32:25.958 13:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2386440 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2386440 ']' 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.958 13:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:25.958 [2024-12-06 13:40:12.526523] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:25.958 [2024-12-06 13:40:12.528216] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:32:25.958 [2024-12-06 13:40:12.528291] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.218 [2024-12-06 13:40:12.622612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.218 [2024-12-06 13:40:12.654909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.218 [2024-12-06 13:40:12.654940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.218 [2024-12-06 13:40:12.654946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.218 [2024-12-06 13:40:12.654951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.219 [2024-12-06 13:40:12.654955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.219 [2024-12-06 13:40:12.655445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.219 [2024-12-06 13:40:12.707828] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:26.219 [2024-12-06 13:40:12.708010] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.792 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:27.052 [2024-12-06 13:40:13.533914] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:27.052 [2024-12-06 13:40:13.534167] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:27.052 [2024-12-06 13:40:13.534260] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:27.052 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:27.312 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a96c6ea4-72ea-496b-8a55-7ee5194318fc -t 2000 00:32:27.312 [ 00:32:27.312 { 00:32:27.312 "name": "a96c6ea4-72ea-496b-8a55-7ee5194318fc", 00:32:27.312 "aliases": [ 00:32:27.312 "lvs/lvol" 00:32:27.312 ], 00:32:27.312 "product_name": "Logical Volume", 00:32:27.312 "block_size": 4096, 00:32:27.312 "num_blocks": 38912, 00:32:27.312 "uuid": "a96c6ea4-72ea-496b-8a55-7ee5194318fc", 00:32:27.312 "assigned_rate_limits": { 00:32:27.312 "rw_ios_per_sec": 0, 00:32:27.312 "rw_mbytes_per_sec": 0, 00:32:27.312 "r_mbytes_per_sec": 0, 00:32:27.312 "w_mbytes_per_sec": 0 00:32:27.312 }, 00:32:27.312 "claimed": false, 00:32:27.312 "zoned": false, 00:32:27.312 "supported_io_types": { 00:32:27.312 "read": true, 00:32:27.312 "write": true, 00:32:27.312 "unmap": true, 00:32:27.312 "flush": false, 00:32:27.312 "reset": true, 00:32:27.312 "nvme_admin": false, 00:32:27.312 "nvme_io": false, 00:32:27.312 "nvme_io_md": false, 00:32:27.312 "write_zeroes": true, 
00:32:27.312 "zcopy": false, 00:32:27.312 "get_zone_info": false, 00:32:27.312 "zone_management": false, 00:32:27.312 "zone_append": false, 00:32:27.312 "compare": false, 00:32:27.312 "compare_and_write": false, 00:32:27.312 "abort": false, 00:32:27.312 "seek_hole": true, 00:32:27.312 "seek_data": true, 00:32:27.312 "copy": false, 00:32:27.312 "nvme_iov_md": false 00:32:27.312 }, 00:32:27.312 "driver_specific": { 00:32:27.312 "lvol": { 00:32:27.312 "lvol_store_uuid": "47de60f6-d83f-47f1-b1d4-ca526f9a051c", 00:32:27.312 "base_bdev": "aio_bdev", 00:32:27.312 "thin_provision": false, 00:32:27.312 "num_allocated_clusters": 38, 00:32:27.312 "snapshot": false, 00:32:27.312 "clone": false, 00:32:27.312 "esnap_clone": false 00:32:27.312 } 00:32:27.312 } 00:32:27.312 } 00:32:27.312 ] 00:32:27.312 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:27.312 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:27.312 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:27.572 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:27.572 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:27.572 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:27.832 [2024-12-06 13:40:14.407966] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:27.832 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:28.093 request: 00:32:28.093 { 00:32:28.093 "uuid": "47de60f6-d83f-47f1-b1d4-ca526f9a051c", 00:32:28.093 "method": "bdev_lvol_get_lvstores", 00:32:28.093 "req_id": 1 00:32:28.093 } 00:32:28.093 Got JSON-RPC error response 00:32:28.093 response: 00:32:28.093 { 00:32:28.093 "code": -19, 00:32:28.093 "message": "No such device" 00:32:28.093 } 00:32:28.093 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:28.093 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:28.093 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:28.093 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:28.093 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:28.354 aio_bdev 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:28.354 13:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a96c6ea4-72ea-496b-8a55-7ee5194318fc -t 2000 00:32:28.614 [ 00:32:28.614 { 00:32:28.614 "name": "a96c6ea4-72ea-496b-8a55-7ee5194318fc", 00:32:28.614 "aliases": [ 00:32:28.614 "lvs/lvol" 00:32:28.614 ], 00:32:28.614 "product_name": "Logical Volume", 00:32:28.614 "block_size": 4096, 00:32:28.614 "num_blocks": 38912, 00:32:28.614 "uuid": "a96c6ea4-72ea-496b-8a55-7ee5194318fc", 00:32:28.614 "assigned_rate_limits": { 00:32:28.614 "rw_ios_per_sec": 0, 00:32:28.614 "rw_mbytes_per_sec": 0, 00:32:28.614 
"r_mbytes_per_sec": 0, 00:32:28.614 "w_mbytes_per_sec": 0 00:32:28.614 }, 00:32:28.614 "claimed": false, 00:32:28.614 "zoned": false, 00:32:28.614 "supported_io_types": { 00:32:28.614 "read": true, 00:32:28.614 "write": true, 00:32:28.614 "unmap": true, 00:32:28.614 "flush": false, 00:32:28.614 "reset": true, 00:32:28.614 "nvme_admin": false, 00:32:28.614 "nvme_io": false, 00:32:28.614 "nvme_io_md": false, 00:32:28.614 "write_zeroes": true, 00:32:28.614 "zcopy": false, 00:32:28.614 "get_zone_info": false, 00:32:28.614 "zone_management": false, 00:32:28.614 "zone_append": false, 00:32:28.614 "compare": false, 00:32:28.614 "compare_and_write": false, 00:32:28.614 "abort": false, 00:32:28.614 "seek_hole": true, 00:32:28.614 "seek_data": true, 00:32:28.614 "copy": false, 00:32:28.614 "nvme_iov_md": false 00:32:28.614 }, 00:32:28.614 "driver_specific": { 00:32:28.614 "lvol": { 00:32:28.614 "lvol_store_uuid": "47de60f6-d83f-47f1-b1d4-ca526f9a051c", 00:32:28.614 "base_bdev": "aio_bdev", 00:32:28.614 "thin_provision": false, 00:32:28.614 "num_allocated_clusters": 38, 00:32:28.614 "snapshot": false, 00:32:28.614 "clone": false, 00:32:28.614 "esnap_clone": false 00:32:28.614 } 00:32:28.614 } 00:32:28.614 } 00:32:28.614 ] 00:32:28.614 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:28.614 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:28.614 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:28.873 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:28.873 13:40:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:28.873 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:29.132 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:29.132 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a96c6ea4-72ea-496b-8a55-7ee5194318fc 00:32:29.132 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47de60f6-d83f-47f1-b1d4-ca526f9a051c 00:32:29.391 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:29.651 00:32:29.651 real 0m17.547s 00:32:29.651 user 0m35.296s 00:32:29.651 sys 0m3.228s 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:29.651 ************************************ 00:32:29.651 END TEST lvs_grow_dirty 00:32:29.651 ************************************ 
00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:29.651 nvmf_trace.0 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.651 13:40:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.651 rmmod nvme_tcp 00:32:29.651 rmmod nvme_fabrics 00:32:29.651 rmmod nvme_keyring 00:32:29.651 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2386440 ']' 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2386440 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2386440 ']' 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2386440 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386440 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.911 
13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386440' 00:32:29.911 killing process with pid 2386440 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2386440 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2386440 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.911 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.455 
13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.455 00:32:32.455 real 0m44.805s 00:32:32.455 user 0m53.605s 00:32:32.455 sys 0m10.991s 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.455 ************************************ 00:32:32.455 END TEST nvmf_lvs_grow 00:32:32.455 ************************************ 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:32.455 ************************************ 00:32:32.455 START TEST nvmf_bdev_io_wait 00:32:32.455 ************************************ 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:32.455 * Looking for test storage... 
00:32:32.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.455 --rc genhtml_branch_coverage=1 00:32:32.455 --rc genhtml_function_coverage=1 00:32:32.455 --rc genhtml_legend=1 00:32:32.455 --rc geninfo_all_blocks=1 00:32:32.455 --rc geninfo_unexecuted_blocks=1 00:32:32.455 00:32:32.455 ' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.455 --rc genhtml_branch_coverage=1 00:32:32.455 --rc genhtml_function_coverage=1 00:32:32.455 --rc genhtml_legend=1 00:32:32.455 --rc geninfo_all_blocks=1 00:32:32.455 --rc geninfo_unexecuted_blocks=1 00:32:32.455 00:32:32.455 ' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.455 --rc genhtml_branch_coverage=1 00:32:32.455 --rc genhtml_function_coverage=1 00:32:32.455 --rc genhtml_legend=1 00:32:32.455 --rc geninfo_all_blocks=1 00:32:32.455 --rc geninfo_unexecuted_blocks=1 00:32:32.455 00:32:32.455 ' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.455 --rc genhtml_branch_coverage=1 00:32:32.455 --rc genhtml_function_coverage=1 
00:32:32.455 --rc genhtml_legend=1 00:32:32.455 --rc geninfo_all_blocks=1 00:32:32.455 --rc geninfo_unexecuted_blocks=1 00:32:32.455 00:32:32.455 ' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:32.455 13:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.455 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.456 13:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.456 13:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.456 13:40:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.456 13:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:40.592 13:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:40.592 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:40.592 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:40.592 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:40.592 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:40.592 13:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:40.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:40.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:32:40.592 00:32:40.592 --- 10.0.0.2 ping statistics --- 00:32:40.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.592 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:40.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:32:40.592 00:32:40.592 --- 10.0.0.1 ping statistics --- 00:32:40.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.592 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:40.592 13:40:26 
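The network setup traced above (flushing addresses, creating the cvl_0_0_ns_spdk namespace, moving the target port into it, addressing both ends, opening TCP port 4420, and ping-verifying the link) can be sketched as one helper. This is a hedged reconstruction from this trace only, not the actual nvmf/common.sh implementation: the interface names, IPs, and port are the ones this run happened to use and would differ on other hardware. Every command requires root.

```shell
#!/usr/bin/env bash
# Sketch of the network setup performed by nvmftestinit in the trace above.
# Assumptions (taken from this run only): target NIC cvl_0_0, initiator NIC
# cvl_0_1, target IP 10.0.0.2, initiator IP 10.0.0.1, NVMe/TCP port 4420.
setup_spdk_tcp_net() {
    local ns=cvl_0_0_ns_spdk
    local target_if=cvl_0_0 initiator_if=cvl_0_1
    local target_ip=10.0.0.2 initiator_ip=10.0.0.1

    # Flush any stale IPv4 addresses before reconfiguring.
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Isolate the target port in its own namespace; the initiator side
    # stays in the root namespace so both ends of the link are testable
    # on a single host.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Address both ends of the link.
    ip addr add "$initiator_ip/24" dev "$initiator_if"
    ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"

    # Bring up the interfaces and the namespaced loopback.
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow inbound NVMe/TCP traffic on the initiator-side interface.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Verify connectivity in both directions, as the trace does.
    ping -c 1 "$target_ip"
    ip netns exec "$ns" ping -c 1 "$initiator_ip"
}
```

The target application is then launched inside the namespace, as the trace shows next with `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`.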
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2391499 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2391499 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2391499 ']' 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.592 13:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.592 [2024-12-06 13:40:26.472122] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:40.592 [2024-12-06 13:40:26.473244] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:32:40.592 [2024-12-06 13:40:26.473290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.592 [2024-12-06 13:40:26.572476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:40.592 [2024-12-06 13:40:26.626622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:40.592 [2024-12-06 13:40:26.626676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.592 [2024-12-06 13:40:26.626686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.592 [2024-12-06 13:40:26.626697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:40.592 [2024-12-06 13:40:26.626703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:40.592 [2024-12-06 13:40:26.628734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.592 [2024-12-06 13:40:26.628966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:40.592 [2024-12-06 13:40:26.629129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:40.592 [2024-12-06 13:40:26.629129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.592 [2024-12-06 13:40:26.629644] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 [2024-12-06 13:40:27.405632] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:40.853 [2024-12-06 13:40:27.406803] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:40.853 [2024-12-06 13:40:27.406960] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:40.853 [2024-12-06 13:40:27.407088] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 [2024-12-06 13:40:27.418157] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 Malloc0 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:40.853 [2024-12-06 13:40:27.490319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2391633 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2391636 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:40.853 13:40:27 
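The target-side RPC sequence that bdev_io_wait.sh drives, reconstructed from the rpc_cmd calls in the trace above, can be sketched as follows. This is a hedged sketch, not the script itself: it assumes the target runs inside the cvl_0_0_ns_spdk namespace, listens on 10.0.0.2:4420, and is driven through SPDK's standard `scripts/rpc.py` client; the bdev and NQN names are the ones this run uses, and the flag interpretations in the comments reflect the values seen here.

```shell
#!/usr/bin/env bash
# Sketch of the target configuration traced above. Assumes a running nvmf_tgt
# started with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace.
configure_bdev_io_wait_target() {
    local rpc="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"

    # Deliberately tiny bdev I/O pool (-p 5) and cache (-c 1), so that the
    # test can provoke the bdev-io-wait (queue-on-ENOMEM) path it exercises.
    $rpc bdev_set_options -p 5 -c 1
    # The target was started with --wait-for-rpc; begin framework init now.
    $rpc framework_start_init

    # TCP transport with the options used in this run (-o -u 8192).
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE /
    # MALLOC_BLOCK_SIZE in the trace), exported through one subsystem.
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

Once the listener is up, the trace launches four bdevperf clients (write, read, flush, unmap) against the subsystem, which is what the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID bookkeeping that follows is tracking.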
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.853 { 00:32:40.853 "params": { 00:32:40.853 "name": "Nvme$subsystem", 00:32:40.853 "trtype": "$TEST_TRANSPORT", 00:32:40.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.853 "adrfam": "ipv4", 00:32:40.853 "trsvcid": "$NVMF_PORT", 00:32:40.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.853 "hdgst": ${hdgst:-false}, 00:32:40.853 "ddgst": ${ddgst:-false} 00:32:40.853 }, 00:32:40.853 "method": "bdev_nvme_attach_controller" 00:32:40.853 } 00:32:40.853 EOF 00:32:40.853 )") 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2391639 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.853 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.854 13:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.854 { 00:32:40.854 "params": { 00:32:40.854 "name": "Nvme$subsystem", 00:32:40.854 "trtype": "$TEST_TRANSPORT", 00:32:40.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.854 "adrfam": "ipv4", 00:32:40.854 "trsvcid": "$NVMF_PORT", 00:32:40.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.854 "hdgst": ${hdgst:-false}, 00:32:40.854 "ddgst": ${ddgst:-false} 00:32:40.854 }, 00:32:40.854 "method": "bdev_nvme_attach_controller" 00:32:40.854 } 00:32:40.854 EOF 00:32:40.854 )") 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2391642 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.854 { 00:32:40.854 "params": { 00:32:40.854 "name": 
"Nvme$subsystem", 00:32:40.854 "trtype": "$TEST_TRANSPORT", 00:32:40.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.854 "adrfam": "ipv4", 00:32:40.854 "trsvcid": "$NVMF_PORT", 00:32:40.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.854 "hdgst": ${hdgst:-false}, 00:32:40.854 "ddgst": ${ddgst:-false} 00:32:40.854 }, 00:32:40.854 "method": "bdev_nvme_attach_controller" 00:32:40.854 } 00:32:40.854 EOF 00:32:40.854 )") 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:40.854 { 00:32:40.854 "params": { 00:32:40.854 "name": "Nvme$subsystem", 00:32:40.854 "trtype": "$TEST_TRANSPORT", 00:32:40.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.854 "adrfam": "ipv4", 00:32:40.854 "trsvcid": "$NVMF_PORT", 00:32:40.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.854 "hdgst": ${hdgst:-false}, 00:32:40.854 "ddgst": ${ddgst:-false} 00:32:40.854 }, 00:32:40.854 "method": 
"bdev_nvme_attach_controller" 00:32:40.854 } 00:32:40.854 EOF 00:32:40.854 )") 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2391633 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:40.854 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.115 "params": { 00:32:41.115 "name": "Nvme1", 00:32:41.115 "trtype": "tcp", 00:32:41.115 "traddr": "10.0.0.2", 00:32:41.115 "adrfam": "ipv4", 00:32:41.115 "trsvcid": "4420", 00:32:41.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.115 "hdgst": false, 00:32:41.115 "ddgst": false 00:32:41.115 }, 00:32:41.115 "method": "bdev_nvme_attach_controller" 00:32:41.115 }' 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.115 "params": { 00:32:41.115 "name": "Nvme1", 00:32:41.115 "trtype": "tcp", 00:32:41.115 "traddr": "10.0.0.2", 00:32:41.115 "adrfam": "ipv4", 00:32:41.115 "trsvcid": "4420", 00:32:41.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.115 "hdgst": false, 00:32:41.115 "ddgst": false 00:32:41.115 }, 00:32:41.115 "method": "bdev_nvme_attach_controller" 00:32:41.115 }' 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.115 "params": { 00:32:41.115 "name": "Nvme1", 00:32:41.115 "trtype": "tcp", 00:32:41.115 "traddr": "10.0.0.2", 00:32:41.115 "adrfam": "ipv4", 00:32:41.115 "trsvcid": "4420", 00:32:41.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.115 "hdgst": false, 00:32:41.115 "ddgst": false 00:32:41.115 }, 00:32:41.115 "method": "bdev_nvme_attach_controller" 00:32:41.115 }' 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:41.115 13:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.115 "params": { 00:32:41.115 "name": "Nvme1", 00:32:41.115 "trtype": "tcp", 00:32:41.115 "traddr": "10.0.0.2", 00:32:41.115 "adrfam": "ipv4", 00:32:41.115 "trsvcid": "4420", 00:32:41.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.115 "hdgst": false, 00:32:41.115 "ddgst": false 00:32:41.115 }, 00:32:41.115 "method": "bdev_nvme_attach_controller" 
00:32:41.115 }' 00:32:41.115 [2024-12-06 13:40:27.550160] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:32:41.115 [2024-12-06 13:40:27.550222] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:41.115 [2024-12-06 13:40:27.551544] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:32:41.115 [2024-12-06 13:40:27.551574] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:32:41.115 [2024-12-06 13:40:27.551607] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:41.115 [2024-12-06 13:40:27.551651] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:41.115 [2024-12-06 13:40:27.555202] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:32:41.115 [2024-12-06 13:40:27.555292] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:41.115 [2024-12-06 13:40:27.761996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.375 [2024-12-06 13:40:27.802169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:41.375 [2024-12-06 13:40:27.853013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.375 [2024-12-06 13:40:27.892332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:41.375 [2024-12-06 13:40:27.945430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.375 [2024-12-06 13:40:27.989203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:41.375 [2024-12-06 13:40:28.011085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.636 [2024-12-06 13:40:28.048507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:41.636 Running I/O for 1 seconds... 00:32:41.636 Running I/O for 1 seconds... 00:32:41.636 Running I/O for 1 seconds... 00:32:41.896 Running I/O for 1 seconds... 
00:32:42.838 12529.00 IOPS, 48.94 MiB/s 00:32:42.838 Latency(us) 00:32:42.838 [2024-12-06T12:40:29.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.838 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:42.838 Nvme1n1 : 1.01 12575.51 49.12 0.00 0.00 10142.81 5079.04 12724.91 00:32:42.838 [2024-12-06T12:40:29.497Z] =================================================================================================================== 00:32:42.838 [2024-12-06T12:40:29.497Z] Total : 12575.51 49.12 0.00 0.00 10142.81 5079.04 12724.91 00:32:42.838 6774.00 IOPS, 26.46 MiB/s 00:32:42.838 Latency(us) 00:32:42.838 [2024-12-06T12:40:29.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.838 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:42.838 Nvme1n1 : 1.02 6827.73 26.67 0.00 0.00 18633.17 2471.25 30801.92 00:32:42.838 [2024-12-06T12:40:29.497Z] =================================================================================================================== 00:32:42.838 [2024-12-06T12:40:29.497Z] Total : 6827.73 26.67 0.00 0.00 18633.17 2471.25 30801.92 00:32:42.838 180240.00 IOPS, 704.06 MiB/s 00:32:42.838 Latency(us) 00:32:42.838 [2024-12-06T12:40:29.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.838 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:42.838 Nvme1n1 : 1.00 179883.02 702.67 0.00 0.00 707.62 300.37 1966.08 00:32:42.838 [2024-12-06T12:40:29.497Z] =================================================================================================================== 00:32:42.838 [2024-12-06T12:40:29.497Z] Total : 179883.02 702.67 0.00 0.00 707.62 300.37 1966.08 00:32:42.838 7325.00 IOPS, 28.61 MiB/s 00:32:42.838 Latency(us) 00:32:42.838 [2024-12-06T12:40:29.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.838 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:42.838 Nvme1n1 : 1.01 7448.40 29.10 0.00 0.00 17138.41 3986.77 37792.43 00:32:42.838 [2024-12-06T12:40:29.497Z] =================================================================================================================== 00:32:42.838 [2024-12-06T12:40:29.497Z] Total : 7448.40 29.10 0.00 0.00 17138.41 3986.77 37792.43 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2391636 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2391639 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2391642 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.838 13:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.838 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.838 rmmod nvme_tcp 00:32:42.838 rmmod nvme_fabrics 00:32:42.838 rmmod nvme_keyring 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2391499 ']' 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2391499 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2391499 ']' 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2391499 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391499 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391499' 00:32:43.099 killing process with pid 2391499 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2391499 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2391499 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.099 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.100 13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.100 
13:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.649 00:32:45.649 real 0m13.145s 00:32:45.649 user 0m16.223s 00:32:45.649 sys 0m7.651s 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.649 ************************************ 00:32:45.649 END TEST nvmf_bdev_io_wait 00:32:45.649 ************************************ 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.649 ************************************ 00:32:45.649 START TEST nvmf_queue_depth 00:32:45.649 ************************************ 00:32:45.649 13:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:45.649 * Looking for test storage... 
00:32:45.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:45.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.649 --rc genhtml_branch_coverage=1 00:32:45.649 --rc genhtml_function_coverage=1 00:32:45.649 --rc genhtml_legend=1 00:32:45.649 --rc geninfo_all_blocks=1 00:32:45.649 --rc geninfo_unexecuted_blocks=1 00:32:45.649 00:32:45.649 ' 00:32:45.649 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:45.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.649 --rc genhtml_branch_coverage=1 00:32:45.649 --rc genhtml_function_coverage=1 00:32:45.650 --rc genhtml_legend=1 00:32:45.650 --rc geninfo_all_blocks=1 00:32:45.650 --rc geninfo_unexecuted_blocks=1 00:32:45.650 00:32:45.650 ' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:45.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.650 --rc genhtml_branch_coverage=1 00:32:45.650 --rc genhtml_function_coverage=1 00:32:45.650 --rc genhtml_legend=1 00:32:45.650 --rc geninfo_all_blocks=1 00:32:45.650 --rc geninfo_unexecuted_blocks=1 00:32:45.650 00:32:45.650 ' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:45.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.650 --rc genhtml_branch_coverage=1 00:32:45.650 --rc genhtml_function_coverage=1 00:32:45.650 --rc genhtml_legend=1 00:32:45.650 --rc 
geninfo_all_blocks=1 00:32:45.650 --rc geninfo_unexecuted_blocks=1 00:32:45.650 00:32:45.650 ' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.650 13:40:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.650 13:40:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.650 13:40:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.650 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:53.792 
13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:53.792 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:53.793 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.793 13:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:53.793 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:53.793 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:53.793 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:53.793 13:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:53.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:53.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:32:53.793 00:32:53.793 --- 10.0.0.2 ping statistics --- 00:32:53.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.793 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:32:53.793 00:32:53.793 --- 10.0.0.1 ping statistics --- 00:32:53.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.793 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:53.793 13:40:39 
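The network plumbing recorded above (`nvmf_tcp_init` in nvmf/common.sh) amounts to moving one NIC port into a private namespace and addressing both ends on 10.0.0.0/24. A condensed sketch of those steps, with the `cvl_0_*` interface names and addresses copied from the log (requires root and the same hardware; treat this as a reconstruction of the logged sequence, not a standalone tool):

```shell
# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the link.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring everything up, including loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then verify
# reachability in both directions, exactly as the log does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```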
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2396224 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2396224 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2396224 ']' 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.793 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:53.794 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.794 13:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:53.794 [2024-12-06 13:40:39.686491] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:53.794 [2024-12-06 13:40:39.687613] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:32:53.794 [2024-12-06 13:40:39.687661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.794 [2024-12-06 13:40:39.789823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.794 [2024-12-06 13:40:39.839920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.794 [2024-12-06 13:40:39.839966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.794 [2024-12-06 13:40:39.839975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.794 [2024-12-06 13:40:39.839982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.794 [2024-12-06 13:40:39.839994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.794 [2024-12-06 13:40:39.840726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.794 [2024-12-06 13:40:39.918053] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:53.794 [2024-12-06 13:40:39.918347] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.055 [2024-12-06 13:40:40.541584] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.055 Malloc0 00:32:54.055 13:40:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.055 [2024-12-06 13:40:40.625765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.055 
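The `rpc_cmd` calls logged above (target/queue_depth.sh steps @23 through @27) reduce to a short RPC sequence against the running target. A hedged sketch, assuming `rpc.py` from the SPDK tree's scripts/ directory is on PATH and pointed at the default /var/tmp/spdk.sock; all subcommands and arguments are copied verbatim from the log:

```shell
# Configuration sequence recorded in the log: TCP transport, a 64 MiB
# malloc bdev with 512-byte blocks, one subsystem exposing it, and a
# listener on the in-namespace target address.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```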
13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2396461 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2396461 /var/tmp/bdevperf.sock 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2396461 ']' 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.055 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:54.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:54.056 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.056 13:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:54.056 [2024-12-06 13:40:40.683125] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:32:54.056 [2024-12-06 13:40:40.683189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396461 ] 00:32:54.316 [2024-12-06 13:40:40.774809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.316 [2024-12-06 13:40:40.827175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.888 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.888 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:54.888 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.888 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.888 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:55.148 NVMe0n1 00:32:55.148 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.148 13:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:55.148 Running I/O for 10 seconds... 
00:32:57.146 8192.00 IOPS, 32.00 MiB/s [2024-12-06T12:40:44.747Z] 8690.00 IOPS, 33.95 MiB/s [2024-12-06T12:40:46.129Z] 9199.67 IOPS, 35.94 MiB/s [2024-12-06T12:40:47.068Z] 9999.00 IOPS, 39.06 MiB/s [2024-12-06T12:40:48.008Z] 10714.20 IOPS, 41.85 MiB/s [2024-12-06T12:40:48.949Z] 11194.33 IOPS, 43.73 MiB/s [2024-12-06T12:40:49.891Z] 11556.14 IOPS, 45.14 MiB/s [2024-12-06T12:40:50.834Z] 11809.88 IOPS, 46.13 MiB/s [2024-12-06T12:40:51.778Z] 12033.22 IOPS, 47.00 MiB/s [2024-12-06T12:40:52.039Z] 12203.10 IOPS, 47.67 MiB/s 00:33:05.380 Latency(us) 00:33:05.380 [2024-12-06T12:40:52.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.380 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:05.380 Verification LBA range: start 0x0 length 0x4000 00:33:05.380 NVMe0n1 : 10.05 12239.74 47.81 0.00 0.00 83383.00 13762.56 71215.79 00:33:05.380 [2024-12-06T12:40:52.039Z] =================================================================================================================== 00:33:05.380 [2024-12-06T12:40:52.039Z] Total : 12239.74 47.81 0.00 0.00 83383.00 13762.56 71215.79 00:33:05.380 { 00:33:05.380 "results": [ 00:33:05.380 { 00:33:05.380 "job": "NVMe0n1", 00:33:05.380 "core_mask": "0x1", 00:33:05.380 "workload": "verify", 00:33:05.380 "status": "finished", 00:33:05.380 "verify_range": { 00:33:05.380 "start": 0, 00:33:05.380 "length": 16384 00:33:05.380 }, 00:33:05.380 "queue_depth": 1024, 00:33:05.380 "io_size": 4096, 00:33:05.380 "runtime": 10.048251, 00:33:05.380 "iops": 12239.742020775557, 00:33:05.380 "mibps": 47.81149226865452, 00:33:05.380 "io_failed": 0, 00:33:05.380 "io_timeout": 0, 00:33:05.380 "avg_latency_us": 83382.9977177177, 00:33:05.380 "min_latency_us": 13762.56, 00:33:05.380 "max_latency_us": 71215.78666666667 00:33:05.380 } 00:33:05.380 ], 00:33:05.380 "core_count": 1 00:33:05.380 } 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2396461 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2396461 ']' 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2396461 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396461 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396461' 00:33:05.380 killing process with pid 2396461 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2396461 00:33:05.380 Received shutdown signal, test time was about 10.000000 seconds 00:33:05.380 00:33:05.380 Latency(us) 00:33:05.380 [2024-12-06T12:40:52.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.380 [2024-12-06T12:40:52.039Z] =================================================================================================================== 00:33:05.380 [2024-12-06T12:40:52.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2396461 00:33:05.380 13:40:51 
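The JSON summary bdevperf prints at the end of the run can be post-processed directly. A minimal sketch: the `summary` literal below is copied from the results block above (trimmed to the fields used here), and the sanity check relies on the fixed 4096-byte IO size the job reports, so MiB/s must equal IOPS divided by 256:

```python
import json

# bdevperf end-of-run summary, copied from the log and trimmed.
summary = json.loads("""
{
  "results": [
    {
      "job": "NVMe0n1",
      "queue_depth": 1024,
      "io_size": 4096,
      "runtime": 10.048251,
      "iops": 12239.742020775557,
      "mibps": 47.81149226865452,
      "io_failed": 0
    }
  ],
  "core_count": 1
}
""")

job = summary["results"][0]
# For a fixed IO size, MiB/s is just IOPS * io_size / 1 MiB.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
assert abs(derived_mibps - job["mibps"]) < 1e-6
print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {derived_mibps:.2f} MiB/s')
```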
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.380 13:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.380 rmmod nvme_tcp 00:33:05.380 rmmod nvme_fabrics 00:33:05.380 rmmod nvme_keyring 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2396224 ']' 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2396224 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2396224 ']' 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2396224 00:33:05.641 13:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396224 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396224' 00:33:05.641 killing process with pid 2396224 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2396224 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2396224 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.641 13:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.187 00:33:08.187 real 0m22.413s 00:33:08.187 user 0m24.552s 00:33:08.187 sys 0m7.490s 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:08.187 ************************************ 00:33:08.187 END TEST nvmf_queue_depth 00:33:08.187 ************************************ 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:08.187 ************************************ 00:33:08.187 START 
TEST nvmf_target_multipath 00:33:08.187 ************************************ 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:08.187 * Looking for test storage... 00:33:08.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.187 13:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.187 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:08.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.187 --rc genhtml_branch_coverage=1 00:33:08.187 --rc genhtml_function_coverage=1 00:33:08.187 --rc genhtml_legend=1 00:33:08.187 --rc geninfo_all_blocks=1 00:33:08.188 --rc geninfo_unexecuted_blocks=1 00:33:08.188 00:33:08.188 ' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.188 --rc genhtml_branch_coverage=1 00:33:08.188 --rc genhtml_function_coverage=1 00:33:08.188 --rc genhtml_legend=1 00:33:08.188 --rc geninfo_all_blocks=1 00:33:08.188 --rc geninfo_unexecuted_blocks=1 00:33:08.188 00:33:08.188 ' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.188 --rc genhtml_branch_coverage=1 00:33:08.188 --rc genhtml_function_coverage=1 00:33:08.188 --rc genhtml_legend=1 00:33:08.188 --rc geninfo_all_blocks=1 00:33:08.188 --rc geninfo_unexecuted_blocks=1 00:33:08.188 00:33:08.188 ' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:08.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.188 --rc genhtml_branch_coverage=1 00:33:08.188 --rc genhtml_function_coverage=1 00:33:08.188 --rc genhtml_legend=1 00:33:08.188 --rc geninfo_all_blocks=1 00:33:08.188 --rc geninfo_unexecuted_blocks=1 00:33:08.188 00:33:08.188 ' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.188 13:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.188 13:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.188 13:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.331 13:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.331 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:16.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:16.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:16.332 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.332 13:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:16.332 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.332 13:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.332 13:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.332 13:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:33:16.332 00:33:16.332 --- 10.0.0.2 ping statistics --- 00:33:16.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.332 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:33:16.332 00:33:16.332 --- 10.0.0.1 ping statistics --- 00:33:16.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.332 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:16.332 only one NIC for nvmf test 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:16.332 13:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:16.332 rmmod nvme_tcp 00:33:16.332 rmmod nvme_fabrics 00:33:16.332 rmmod nvme_keyring 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:16.332 13:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.332 13:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.713 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.713 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:17.713 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:17.713 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:17.713 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.974 
13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.974 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.974 00:33:17.974 real 0m10.016s 00:33:17.974 user 0m2.222s 00:33:17.974 sys 0m5.713s 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:17.975 ************************************ 00:33:17.975 END TEST nvmf_target_multipath 00:33:17.975 ************************************ 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:17.975 ************************************ 00:33:17.975 START TEST nvmf_zcopy 00:33:17.975 ************************************ 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:17.975 * Looking for test storage... 
00:33:17.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:33:17.975 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:18.236 13:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.236 --rc genhtml_branch_coverage=1 00:33:18.236 --rc genhtml_function_coverage=1 00:33:18.236 --rc genhtml_legend=1 00:33:18.236 --rc geninfo_all_blocks=1 00:33:18.236 --rc geninfo_unexecuted_blocks=1 00:33:18.236 00:33:18.236 ' 00:33:18.236 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.237 --rc genhtml_branch_coverage=1 00:33:18.237 --rc genhtml_function_coverage=1 00:33:18.237 --rc genhtml_legend=1 00:33:18.237 --rc geninfo_all_blocks=1 00:33:18.237 --rc geninfo_unexecuted_blocks=1 00:33:18.237 00:33:18.237 ' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.237 13:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.237 13:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.237 13:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.376 
13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.376 13:41:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:26.376 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:26.376 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:26.376 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:26.376 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.376 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.377 13:41:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:33:26.377 00:33:26.377 --- 10.0.0.2 ping statistics --- 00:33:26.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.377 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:26.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:33:26.377 00:33:26.377 --- 10.0.0.1 ping statistics --- 00:33:26.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.377 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2406910 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2406910 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2406910 ']' 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.377 13:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 [2024-12-06 13:41:12.042703] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:26.377 [2024-12-06 13:41:12.043864] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:33:26.377 [2024-12-06 13:41:12.043918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.377 [2024-12-06 13:41:12.145974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.377 [2024-12-06 13:41:12.196300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.377 [2024-12-06 13:41:12.196354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.377 [2024-12-06 13:41:12.196362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.377 [2024-12-06 13:41:12.196369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.377 [2024-12-06 13:41:12.196376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.377 [2024-12-06 13:41:12.197083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.377 [2024-12-06 13:41:12.274325] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:26.377 [2024-12-06 13:41:12.274617] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 [2024-12-06 13:41:12.921976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 
13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 [2024-12-06 13:41:12.950273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 malloc0 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:26.377 13:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:26.377 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:26.377 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:26.377 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:26.377 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:26.377 { 00:33:26.377 "params": { 00:33:26.377 "name": "Nvme$subsystem", 00:33:26.377 "trtype": "$TEST_TRANSPORT", 00:33:26.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:26.377 "adrfam": "ipv4", 00:33:26.377 "trsvcid": "$NVMF_PORT", 00:33:26.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:26.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:26.377 "hdgst": ${hdgst:-false}, 00:33:26.377 "ddgst": ${ddgst:-false} 00:33:26.377 }, 00:33:26.377 "method": "bdev_nvme_attach_controller" 00:33:26.377 } 00:33:26.377 EOF 00:33:26.377 )") 00:33:26.377 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:26.378 13:41:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:26.378 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:26.378 13:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:26.378 "params": { 00:33:26.378 "name": "Nvme1", 00:33:26.378 "trtype": "tcp", 00:33:26.378 "traddr": "10.0.0.2", 00:33:26.378 "adrfam": "ipv4", 00:33:26.378 "trsvcid": "4420", 00:33:26.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:26.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:26.378 "hdgst": false, 00:33:26.378 "ddgst": false 00:33:26.378 }, 00:33:26.378 "method": "bdev_nvme_attach_controller" 00:33:26.378 }' 00:33:26.639 [2024-12-06 13:41:13.053514] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:33:26.639 [2024-12-06 13:41:13.053581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2407040 ] 00:33:26.639 [2024-12-06 13:41:13.144254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.639 [2024-12-06 13:41:13.197443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.900 Running I/O for 10 seconds... 
00:33:29.245 6505.00 IOPS, 50.82 MiB/s [2024-12-06T12:41:16.845Z] 6520.00 IOPS, 50.94 MiB/s [2024-12-06T12:41:17.785Z] 6597.33 IOPS, 51.54 MiB/s [2024-12-06T12:41:18.723Z] 6753.75 IOPS, 52.76 MiB/s [2024-12-06T12:41:19.662Z] 7346.60 IOPS, 57.40 MiB/s [2024-12-06T12:41:20.602Z] 7740.67 IOPS, 60.47 MiB/s [2024-12-06T12:41:21.544Z] 8015.00 IOPS, 62.62 MiB/s [2024-12-06T12:41:22.960Z] 8228.25 IOPS, 64.28 MiB/s [2024-12-06T12:41:23.532Z] 8392.33 IOPS, 65.57 MiB/s [2024-12-06T12:41:23.532Z] 8525.70 IOPS, 66.61 MiB/s 00:33:36.873 Latency(us) 00:33:36.873 [2024-12-06T12:41:23.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.873 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:36.873 Verification LBA range: start 0x0 length 0x1000 00:33:36.873 Nvme1n1 : 10.01 8529.25 66.63 0.00 0.00 14962.09 2348.37 29272.75 00:33:36.873 [2024-12-06T12:41:23.532Z] =================================================================================================================== 00:33:36.873 [2024-12-06T12:41:23.532Z] Total : 8529.25 66.63 0.00 0.00 14962.09 2348.37 29272.75 00:33:37.134 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2408970 00:33:37.134 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:37.134 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:37.134 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:37.134 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:37.135 13:41:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:37.135 { 00:33:37.135 "params": { 00:33:37.135 "name": "Nvme$subsystem", 00:33:37.135 "trtype": "$TEST_TRANSPORT", 00:33:37.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.135 "adrfam": "ipv4", 00:33:37.135 "trsvcid": "$NVMF_PORT", 00:33:37.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.135 "hdgst": ${hdgst:-false}, 00:33:37.135 "ddgst": ${ddgst:-false} 00:33:37.135 }, 00:33:37.135 "method": "bdev_nvme_attach_controller" 00:33:37.135 } 00:33:37.135 EOF 00:33:37.135 )") 00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:37.135 [2024-12-06 13:41:23.629504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.135 [2024-12-06 13:41:23.629534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:37.135 13:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:37.135 "params": { 00:33:37.135 "name": "Nvme1", 00:33:37.135 "trtype": "tcp", 00:33:37.135 "traddr": "10.0.0.2", 00:33:37.135 "adrfam": "ipv4", 00:33:37.135 "trsvcid": "4420", 00:33:37.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.135 "hdgst": false, 00:33:37.135 "ddgst": false 00:33:37.135 }, 00:33:37.135 "method": "bdev_nvme_attach_controller" 00:33:37.135 }' 00:33:37.135 [2024-12-06 13:41:23.641462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.135 [2024-12-06 13:41:23.641472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.135 [2024-12-06 13:41:23.653464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.135 [2024-12-06 13:41:23.653473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.135 [2024-12-06 13:41:23.665458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:37.135 [2024-12-06 13:41:23.665467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:37.135 [2024-12-06 13:41:23.671525] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:33:37.135 [2024-12-06 13:41:23.671578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408970 ]
00:33:37.135 [2024-12-06 13:41:23.677464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:37.135 [2024-12-06 13:41:23.677473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:37.135 [2024-12-06 13:41:23.757436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:37.135 [2024-12-06 13:41:23.786766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:37.395 Running I/O for 5 seconds...
[the *ERROR* pair above repeats at ~12-15 ms intervals, from 13:41:23.689458 through 13:41:25.908600, while the workload runs]
00:33:38.443 19188.00 IOPS, 149.91 MiB/s [2024-12-06T12:41:25.102Z]
00:33:39.485 [2024-12-06 13:41:25.908600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 
[2024-12-06 13:41:25.908616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:25.921507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:25.921523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:25.934541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:25.934556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:25.948296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:25.948311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:25.961279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:25.961295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 19241.00 IOPS, 150.32 MiB/s [2024-12-06T12:41:26.144Z] [2024-12-06 13:41:25.974148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:25.974163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:25.988739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:25.988755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.001831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.001846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.016397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 
[2024-12-06 13:41:26.016413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.029208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.029224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.041926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.041941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.056887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.056903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.070007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.070022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.084411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.084427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.097282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.097297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.110560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.110575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.124443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.124462] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.485 [2024-12-06 13:41:26.137506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.485 [2024-12-06 13:41:26.137525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.150338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.150354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.164808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.164823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.177882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.177897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.192667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.192683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.205621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.205636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.218719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.218734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.233176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.233191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:39.745 [2024-12-06 13:41:26.246113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.246127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.260304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.260319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.272997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.273012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.285856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.285870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.300469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.300484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.313402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.313418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.326576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.326591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.340404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.340419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.353092] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.353108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.366245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.366259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.380720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.380735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:39.745 [2024-12-06 13:41:26.393716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:39.745 [2024-12-06 13:41:26.393734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.408316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.408333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.421405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.421421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.434691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.434706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.448566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.448580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.461527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.461541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.473795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.473809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.487855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.487870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.500883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.500898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.513500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.513515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.526220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.526234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.540846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.540862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.553780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.553794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.568504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 
[2024-12-06 13:41:26.568520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.581298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.581313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.594390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.594405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.005 [2024-12-06 13:41:26.608366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.005 [2024-12-06 13:41:26.608381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.006 [2024-12-06 13:41:26.621044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.006 [2024-12-06 13:41:26.621058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.006 [2024-12-06 13:41:26.634410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.006 [2024-12-06 13:41:26.634424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.006 [2024-12-06 13:41:26.648612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.006 [2024-12-06 13:41:26.648627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.006 [2024-12-06 13:41:26.661586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.006 [2024-12-06 13:41:26.661602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.674047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.674063] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.688595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.688610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.701568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.701583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.714328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.714343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.728480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.728495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.741585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.741600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.754291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.754305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.768577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.768592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.781730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.781744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:40.266 [2024-12-06 13:41:26.796580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.796595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.809469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.809483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.822247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.822261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.836885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.836899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.850115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.850130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.864837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.864852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.877759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.877773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.892595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.892611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.905795] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.905810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.266 [2024-12-06 13:41:26.920822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.266 [2024-12-06 13:41:26.920837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:26.933764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:26.933780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:26.948271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:26.948286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:26.960844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:26.960859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 19247.33 IOPS, 150.37 MiB/s [2024-12-06T12:41:27.186Z] [2024-12-06 13:41:26.973422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:26.973437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:26.986170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:26.986185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.000384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.000399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.013561] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.013576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.026267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.026282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.041153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.041168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.053775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.053789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.069206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.069221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.082272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.082288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.096657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.096673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.109497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.109514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.122319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.122335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.136285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.136301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.148773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.148792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.161437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.161457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.527 [2024-12-06 13:41:27.174155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.527 [2024-12-06 13:41:27.174170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.188174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.188190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.201229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.201245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.213962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.213976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.228552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 
[2024-12-06 13:41:27.228568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.241574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.241589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.254128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.254143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.268238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.268254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.281299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.281314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.294711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.294726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.309086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.309101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.321696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.321712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.334145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.334160] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.348378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.348393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.361539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.361555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.374065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.374080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.388332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.388347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.401028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.401047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.414292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.414308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.428988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.429003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:40.789 [2024-12-06 13:41:27.441801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:40.789 [2024-12-06 13:41:27.441816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:41.050 [2024-12-06 13:41:27.456509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.456526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.469525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.469540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.482211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.482226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.496600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.496616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.509510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.509525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.522170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.522185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.536925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.536940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.549781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.549795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.564159] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.564174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.577063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.577079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.590003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.590018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.604025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.604041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.616925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.616941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.630222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.630237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.644356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.644371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.656995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.657015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.669517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.669533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.682591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.682606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.050 [2024-12-06 13:41:27.696510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.050 [2024-12-06 13:41:27.696526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.709374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.709390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.722700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.722715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.736493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.736508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.749404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.749420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.761786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.761801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.776085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 
[2024-12-06 13:41:27.776100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.789536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.789551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.802723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.802738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.816617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.816632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.829555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.829571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.842270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.842285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.856896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.856912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.869747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.869761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.884379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.884394] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.897117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.897131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.910326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.910348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.924431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.924446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.937271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.937286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.950334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.950349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.311 [2024-12-06 13:41:27.964319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.311 [2024-12-06 13:41:27.964334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 19258.75 IOPS, 150.46 MiB/s [2024-12-06T12:41:28.231Z] [2024-12-06 13:41:27.977346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:27.977361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:27.990225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:27.990239] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.004553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.004568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.017353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.017368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.030201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.030215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.044656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.044671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.057645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.057660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.070244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.070259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.084589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.084604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.097199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.097214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:41.572 [2024-12-06 13:41:28.110564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.110579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.125049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.125064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.138089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.138104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.152573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.152588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.165477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.165493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.178484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.178499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.192393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.192408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.205333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.205348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.572 [2024-12-06 13:41:28.218160] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.572 [2024-12-06 13:41:28.218175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.232820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.232836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.245515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.245530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.258154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.258169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.272607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.272622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.285571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.285587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.298285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.298299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.313033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.313048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.325978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.325993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.340216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.340230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.353024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.353039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.365929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.365943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.380698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.380714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.393735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.393749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.408369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.408384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.421409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.421424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.433860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 
[2024-12-06 13:41:28.433874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.448254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.448269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.460985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.461000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.474065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.474080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:41.833 [2024-12-06 13:41:28.488575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:41.833 [2024-12-06 13:41:28.488591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.501519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.501534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.513998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.514012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.528104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.528119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.541156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.541171] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.553681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.553697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.566266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.566280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.580395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.580411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.593451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.593470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.606099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.606113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.620150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.620166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.633027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.633042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.645880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.645896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:42.095 [2024-12-06 13:41:28.660661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.660677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.673511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.673526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.686277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.686292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.700646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.700661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.713591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.713607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.726067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.726082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.095 [2024-12-06 13:41:28.740095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.095 [2024-12-06 13:41:28.740110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.356 [2024-12-06 13:41:28.753132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.356 [2024-12-06 13:41:28.753148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.356 [2024-12-06 13:41:28.766530] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.356 [2024-12-06 13:41:28.766545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.356 [2024-12-06 13:41:28.780896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.780910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.793702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.793717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.806472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.806487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.820522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.820537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.833418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.833432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.846888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.846903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.860531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.860547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.873511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.873527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.886150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.886166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.901046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.901061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.913791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.913811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.928568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.928584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.941956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.941971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.956653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.956668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:28.969665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.969681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 19258.80 IOPS, 150.46 MiB/s [2024-12-06T12:41:29.016Z] [2024-12-06 13:41:28.982145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.982160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 00:33:42.357 Latency(us) 00:33:42.357 [2024-12-06T12:41:29.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.357 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:42.357 Nvme1n1 : 5.01 19260.67 150.47 0.00 0.00 6639.59 2689.71 11141.12 00:33:42.357 [2024-12-06T12:41:29.016Z] =================================================================================================================== 00:33:42.357 [2024-12-06T12:41:29.016Z] Total : 19260.67 150.47 0.00 0.00 6639.59 2689.71 11141.12 00:33:42.357 [2024-12-06 13:41:28.993462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:28.993476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.357 [2024-12-06 13:41:29.005473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.357 [2024-12-06 13:41:29.005488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.017465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.017477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.029462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.029473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.041462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.041472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.053467] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.053476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.065460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.065470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.077461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.077470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 [2024-12-06 13:41:29.089458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:42.618 [2024-12-06 13:41:29.089467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:42.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2408970) - No such process 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2408970 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.618 delay0 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.618 13:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:42.618 [2024-12-06 13:41:29.214748] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:49.204 Initializing NVMe Controllers 00:33:49.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:49.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:49.204 Initialization complete. Launching workers. 
00:33:49.204 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3309 00:33:49.204 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3577, failed to submit 52 00:33:49.204 success 3417, unsuccessful 160, failed 0 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.204 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.204 rmmod nvme_tcp 00:33:49.204 rmmod nvme_fabrics 00:33:49.464 rmmod nvme_keyring 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2406910 ']' 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2406910 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2406910 ']' 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2406910 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406910 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406910' 00:33:49.464 killing process with pid 2406910 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2406910 00:33:49.464 13:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2406910 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.464 13:41:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.464 13:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.099 00:33:52.099 real 0m33.662s 00:33:52.099 user 0m42.837s 00:33:52.099 sys 0m12.428s 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:52.099 ************************************ 00:33:52.099 END TEST nvmf_zcopy 00:33:52.099 ************************************ 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:52.099 
************************************ 00:33:52.099 START TEST nvmf_nmic 00:33:52.099 ************************************ 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:52.099 * Looking for test storage... 00:33:52.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.099 13:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:52.099 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.100 13:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.100 --rc genhtml_branch_coverage=1 00:33:52.100 --rc genhtml_function_coverage=1 00:33:52.100 --rc genhtml_legend=1 00:33:52.100 --rc geninfo_all_blocks=1 00:33:52.100 --rc geninfo_unexecuted_blocks=1 00:33:52.100 00:33:52.100 ' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.100 --rc genhtml_branch_coverage=1 00:33:52.100 --rc genhtml_function_coverage=1 00:33:52.100 --rc genhtml_legend=1 00:33:52.100 --rc geninfo_all_blocks=1 00:33:52.100 --rc geninfo_unexecuted_blocks=1 00:33:52.100 00:33:52.100 ' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:52.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.100 --rc genhtml_branch_coverage=1 00:33:52.100 --rc genhtml_function_coverage=1 00:33:52.100 --rc genhtml_legend=1 00:33:52.100 --rc geninfo_all_blocks=1 00:33:52.100 --rc geninfo_unexecuted_blocks=1 00:33:52.100 00:33:52.100 ' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:52.100 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.100 --rc genhtml_branch_coverage=1 00:33:52.100 --rc genhtml_function_coverage=1 00:33:52.100 --rc genhtml_legend=1 00:33:52.100 --rc geninfo_all_blocks=1 00:33:52.100 --rc geninfo_unexecuted_blocks=1 00:33:52.100 00:33:52.100 ' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:52.100 13:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.100 13:41:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:52.100 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:52.101 13:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:58.734 13:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.734 13:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:58.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:58.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.734 13:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.734 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:58.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.735 13:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:58.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.735 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.993 13:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:33:58.993 00:33:58.993 --- 10.0.0.2 ping statistics --- 00:33:58.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.993 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:58.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:33:58.993 00:33:58.993 --- 10.0.0.1 ping statistics --- 00:33:58.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.993 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:58.993 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:58.994 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2415518 
00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2415518 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2415518 ']' 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.253 13:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:59.253 [2024-12-06 13:41:45.715941] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:59.253 [2024-12-06 13:41:45.717042] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:33:59.253 [2024-12-06 13:41:45.717092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.253 [2024-12-06 13:41:45.816528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:59.253 [2024-12-06 13:41:45.871223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.253 [2024-12-06 13:41:45.871277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.253 [2024-12-06 13:41:45.871286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.253 [2024-12-06 13:41:45.871293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.253 [2024-12-06 13:41:45.871298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.253 [2024-12-06 13:41:45.873306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.253 [2024-12-06 13:41:45.873495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.253 [2024-12-06 13:41:45.873594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.253 [2024-12-06 13:41:45.873594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.513 [2024-12-06 13:41:45.952088] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:59.513 [2024-12-06 13:41:45.952968] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:59.513 [2024-12-06 13:41:45.953346] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:59.513 [2024-12-06 13:41:45.953837] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:59.513 [2024-12-06 13:41:45.953874] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.083 [2024-12-06 13:41:46.574758] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.083 Malloc0 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.083 [2024-12-06 13:41:46.666954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.083 13:41:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.083 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:00.084 test case1: single bdev can't be used in multiple subsystems 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.084 [2024-12-06 13:41:46.702338] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:00.084 [2024-12-06 13:41:46.702360] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:00.084 [2024-12-06 13:41:46.702368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:00.084 request: 00:34:00.084 { 00:34:00.084 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:00.084 "namespace": { 00:34:00.084 "bdev_name": "Malloc0", 00:34:00.084 "no_auto_visible": false, 00:34:00.084 "hide_metadata": false 00:34:00.084 }, 00:34:00.084 "method": "nvmf_subsystem_add_ns", 00:34:00.084 "req_id": 1 00:34:00.084 } 00:34:00.084 Got JSON-RPC error response 00:34:00.084 response: 00:34:00.084 { 00:34:00.084 "code": -32602, 00:34:00.084 "message": "Invalid parameters" 00:34:00.084 } 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:00.084 Adding namespace failed - expected result. 
00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:00.084 test case2: host connect to nvmf target in multiple paths 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:00.084 [2024-12-06 13:41:46.714438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.084 13:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:00.651 13:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:00.911 13:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:00.911 13:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:00.911 13:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:00.911 13:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:00.911 13:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:02.824 13:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:02.824 [global] 00:34:02.824 thread=1 00:34:02.824 invalidate=1 00:34:02.824 rw=write 00:34:02.824 time_based=1 00:34:02.824 runtime=1 00:34:02.824 ioengine=libaio 00:34:02.824 direct=1 00:34:02.824 bs=4096 00:34:02.824 iodepth=1 00:34:02.824 norandommap=0 00:34:02.824 numjobs=1 00:34:02.824 00:34:02.824 verify_dump=1 00:34:02.824 verify_backlog=512 00:34:02.824 verify_state_save=0 00:34:02.824 do_verify=1 00:34:02.824 verify=crc32c-intel 00:34:02.824 [job0] 00:34:02.824 filename=/dev/nvme0n1 00:34:02.824 Could not set queue depth (nvme0n1) 00:34:03.393 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:03.393 fio-3.35 00:34:03.393 Starting 1 thread 00:34:04.332 00:34:04.332 job0: (groupid=0, jobs=1): err= 0: pid=2416488: Fri Dec 6 
13:41:50 2024 00:34:04.332 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:34:04.332 slat (nsec): min=26267, max=27042, avg=26611.12, stdev=208.72 00:34:04.332 clat (usec): min=40968, max=42030, avg=41806.02, stdev=345.90 00:34:04.332 lat (usec): min=40995, max=42057, avg=41832.63, stdev=345.87 00:34:04.332 clat percentiles (usec): 00:34:04.332 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:04.332 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:04.332 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:04.332 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:04.332 | 99.99th=[42206] 00:34:04.332 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:34:04.332 slat (nsec): min=9137, max=69277, avg=29406.06, stdev=9950.75 00:34:04.332 clat (usec): min=325, max=1232, avg=570.25, stdev=92.50 00:34:04.332 lat (usec): min=337, max=1280, avg=599.66, stdev=97.48 00:34:04.332 clat percentiles (usec): 00:34:04.332 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 453], 20.00th=[ 494], 00:34:04.332 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:34:04.332 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 709], 00:34:04.332 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 1237], 99.95th=[ 1237], 00:34:04.332 | 99.99th=[ 1237] 00:34:04.332 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:04.332 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:04.332 lat (usec) : 500=20.04%, 750=75.43%, 1000=1.13% 00:34:04.332 lat (msec) : 2=0.19%, 50=3.21% 00:34:04.332 cpu : usr=0.98%, sys=1.86%, ctx=529, majf=0, minf=1 00:34:04.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:04.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:04.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:04.332 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:04.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:04.332 00:34:04.332 Run status group 0 (all jobs): 00:34:04.332 READ: bw=66.5KiB/s (68.1kB/s), 66.5KiB/s-66.5KiB/s (68.1kB/s-68.1kB/s), io=68.0KiB (69.6kB), run=1022-1022msec 00:34:04.332 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:34:04.332 00:34:04.332 Disk stats (read/write): 00:34:04.332 nvme0n1: ios=64/512, merge=0/0, ticks=649/238, in_queue=887, util=93.59% 00:34:04.332 13:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:04.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.591 rmmod nvme_tcp 00:34:04.591 rmmod nvme_fabrics 00:34:04.591 rmmod nvme_keyring 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2415518 ']' 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2415518 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2415518 ']' 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2415518 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2415518 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415518' 00:34:04.591 killing process with pid 2415518 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2415518 00:34:04.591 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2415518 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.852 13:41:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.852 13:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.396 00:34:07.396 real 0m15.202s 00:34:07.396 user 0m34.723s 00:34:07.396 sys 0m6.963s 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:07.396 ************************************ 00:34:07.396 END TEST nvmf_nmic 00:34:07.396 ************************************ 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.396 ************************************ 00:34:07.396 START TEST nvmf_fio_target 00:34:07.396 ************************************ 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:07.396 * Looking for test storage... 
00:34:07.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.396 
13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:07.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.396 --rc genhtml_branch_coverage=1 00:34:07.396 --rc genhtml_function_coverage=1 00:34:07.396 --rc genhtml_legend=1 00:34:07.396 --rc geninfo_all_blocks=1 00:34:07.396 --rc geninfo_unexecuted_blocks=1 00:34:07.396 00:34:07.396 ' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:07.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.396 --rc genhtml_branch_coverage=1 00:34:07.396 --rc genhtml_function_coverage=1 00:34:07.396 --rc genhtml_legend=1 00:34:07.396 --rc geninfo_all_blocks=1 00:34:07.396 --rc geninfo_unexecuted_blocks=1 00:34:07.396 00:34:07.396 ' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:07.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.396 --rc genhtml_branch_coverage=1 00:34:07.396 --rc genhtml_function_coverage=1 00:34:07.396 --rc genhtml_legend=1 00:34:07.396 --rc geninfo_all_blocks=1 00:34:07.396 --rc geninfo_unexecuted_blocks=1 00:34:07.396 00:34:07.396 ' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:07.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.396 --rc genhtml_branch_coverage=1 00:34:07.396 --rc genhtml_function_coverage=1 00:34:07.396 --rc genhtml_legend=1 00:34:07.396 --rc geninfo_all_blocks=1 
00:34:07.396 --rc geninfo_unexecuted_blocks=1 00:34:07.396 00:34:07.396 ' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:07.396 
13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.396 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.397 13:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.397 
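The paths/export.sh lines above prepend the same toolchain directories every time the file is sourced, so PATH accumulates many duplicate entries (harmless for lookup, since the first match wins, but noisy in the log). A small sketch of a dedup pass that keeps the first occurrence of each entry (illustrative helper only; SPDK's paths/export.sh does not dedupe):

```shell
# Remove duplicate PATH entries, keeping the first occurrence of each.
# Illustrative helper only, not part of SPDK. Requires bash 4+ (associative arrays).
dedupe_path() {
    local IFS=: entry out=
    local -A seen=()
    for entry in $1; do
        [[ -n ${seen[$entry]} ]] && continue   # already emitted, skip the repeat
        seen[$entry]=1
        out+=${out:+:}$entry
    done
    printf '%s\n' "$out"
}
```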
13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.397 13:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.397 13:41:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.538 13:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:15.538 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:15.538 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.538 
13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:15.538 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.538 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:15.539 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:15.539 13:42:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:15.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:34:15.539 00:34:15.539 --- 10.0.0.2 ping statistics --- 00:34:15.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.539 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:34:15.539 13:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:15.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:34:15.539 00:34:15.539 --- 10.0.0.1 ping statistics --- 00:34:15.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.539 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.539 13:42:01 
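The nvmf_tcp_init sequence above moves one port (cvl_0_0) into a network namespace, addresses both ends from 10.0.0.0/24, opens TCP port 4420 in iptables, and ping-verifies both directions. A dry-run sketch of those steps (it prints the commands instead of executing them, since the real ones need root; interface and namespace names are taken from the log, and the iptables rule is omitted):

```shell
# Dry-run recap of the namespace topology built in nvmf/common.sh:nvmf_tcp_init.
# run=echo prints each command; swap it for 'sudo' to apply for real.
setup_tcp_topology() {
    local target_if=$1 initiator_if=$2 ns=$3 run=echo
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"           # target port lives in the netns
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"    # initiator side
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
}
```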
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2420884 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2420884 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2420884 ']' 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.539 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.539 [2024-12-06 13:42:01.097225] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:15.539 [2024-12-06 13:42:01.098371] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
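waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, retrying with max_retries=100; the same pattern recurs later as waitforserial. A generic sketch of that poll-until-ready loop (the probe command, name, and retry budget here are illustrative, not the exact autotest_common.sh code):

```shell
# Generic poll-until-ready loop in the style of waitforlisten/waitforserial
# from autotest_common.sh. "$@" is any probe command that succeeds once the
# resource is ready; the retry budget and sleep interval are illustrative.
wait_until_ready() {
    local max_retries=$1; shift
    local i
    for (( i = 0; i < max_retries; i++ )); do
        "$@" && return 0
        sleep 0.1   # the real helpers sleep longer between probes
    done
    return 1   # resource never came up
}
```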
00:34:15.539 [2024-12-06 13:42:01.098420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.539 [2024-12-06 13:42:01.197396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:15.539 [2024-12-06 13:42:01.250230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.539 [2024-12-06 13:42:01.250281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.539 [2024-12-06 13:42:01.250289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.539 [2024-12-06 13:42:01.250297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.540 [2024-12-06 13:42:01.250303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.540 [2024-12-06 13:42:01.252339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.540 [2024-12-06 13:42:01.252517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.540 [2024-12-06 13:42:01.252620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:15.540 [2024-12-06 13:42:01.252622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.540 [2024-12-06 13:42:01.331405] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:15.540 [2024-12-06 13:42:01.332053] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:15.540 [2024-12-06 13:42:01.332632] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:15.540 [2024-12-06 13:42:01.333223] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:15.540 [2024-12-06 13:42:01.333267] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.540 13:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:15.540 [2024-12-06 13:42:02.141866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.801 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:15.801 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:15.801 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:16.063 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:16.063 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:16.324 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:16.324 13:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:16.585 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:16.585 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:16.846 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:16.846 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:16.846 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:17.108 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:17.108 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:17.369 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:17.369 13:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:17.369 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:17.630 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:17.630 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.890 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:17.890 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:18.151 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.151 [2024-12-06 13:42:04.745761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.151 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:18.412 13:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:18.673 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:18.933 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:18.933 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:18.933 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:18.933 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:18.933 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:18.933 13:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:21.477 13:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:21.477 [global] 00:34:21.477 thread=1 00:34:21.477 invalidate=1 00:34:21.477 rw=write 00:34:21.477 time_based=1 00:34:21.477 runtime=1 00:34:21.477 ioengine=libaio 00:34:21.477 direct=1 00:34:21.477 bs=4096 00:34:21.477 iodepth=1 00:34:21.477 norandommap=0 00:34:21.477 numjobs=1 00:34:21.477 00:34:21.477 verify_dump=1 00:34:21.477 verify_backlog=512 00:34:21.477 verify_state_save=0 00:34:21.477 do_verify=1 00:34:21.477 verify=crc32c-intel 00:34:21.477 [job0] 00:34:21.477 filename=/dev/nvme0n1 00:34:21.477 [job1] 00:34:21.477 filename=/dev/nvme0n2 00:34:21.477 [job2] 00:34:21.477 filename=/dev/nvme0n3 00:34:21.477 [job3] 00:34:21.477 filename=/dev/nvme0n4 00:34:21.477 Could not set queue depth (nvme0n1) 00:34:21.477 Could not set queue depth (nvme0n2) 00:34:21.477 Could not set queue depth (nvme0n3) 00:34:21.477 Could not set queue depth (nvme0n4) 00:34:21.477 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.477 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.477 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.477 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.477 fio-3.35 00:34:21.477 Starting 4 threads 00:34:22.861 00:34:22.861 job0: (groupid=0, jobs=1): err= 0: pid=2422520: Fri Dec 6 13:42:09 2024 00:34:22.861 read: IOPS=17, BW=71.7KiB/s (73.4kB/s)(72.0KiB/1004msec) 00:34:22.861 slat (nsec): min=27911, max=29325, avg=28349.94, stdev=432.59 00:34:22.861 clat (usec): min=998, max=42040, avg=37399.88, stdev=13224.61 00:34:22.861 lat (usec): min=1028, 
max=42068, avg=37428.23, stdev=13224.49 00:34:22.861 clat percentiles (usec): 00:34:22.861 | 1.00th=[ 996], 5.00th=[ 996], 10.00th=[ 1106], 20.00th=[41681], 00:34:22.861 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:22.861 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:22.861 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:22.861 | 99.99th=[42206] 00:34:22.861 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:34:22.861 slat (usec): min=9, max=1147, avg=34.92, stdev=50.49 00:34:22.861 clat (usec): min=176, max=1036, avg=603.56, stdev=115.18 00:34:22.861 lat (usec): min=187, max=1804, avg=638.48, stdev=130.68 00:34:22.861 clat percentiles (usec): 00:34:22.861 | 1.00th=[ 314], 5.00th=[ 396], 10.00th=[ 453], 20.00th=[ 523], 00:34:22.861 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:34:22.861 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 775], 00:34:22.861 | 99.00th=[ 881], 99.50th=[ 1004], 99.90th=[ 1037], 99.95th=[ 1037], 00:34:22.861 | 99.99th=[ 1037] 00:34:22.861 bw ( KiB/s): min= 4096, max= 4096, per=50.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:22.861 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:22.861 lat (usec) : 250=0.19%, 500=15.85%, 750=72.45%, 1000=7.74% 00:34:22.861 lat (msec) : 2=0.75%, 50=3.02% 00:34:22.861 cpu : usr=1.00%, sys=2.09%, ctx=532, majf=0, minf=1 00:34:22.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.861 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.861 job1: (groupid=0, jobs=1): err= 0: pid=2422521: Fri Dec 6 13:42:09 2024 00:34:22.861 read: IOPS=17, BW=69.5KiB/s 
(71.2kB/s)(72.0KiB/1036msec) 00:34:22.861 slat (nsec): min=27088, max=27887, avg=27395.11, stdev=216.04 00:34:22.861 clat (usec): min=40885, max=42004, avg=41171.02, stdev=397.33 00:34:22.861 lat (usec): min=40912, max=42032, avg=41198.41, stdev=397.28 00:34:22.861 clat percentiles (usec): 00:34:22.861 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:34:22.861 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:22.861 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:22.861 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:22.861 | 99.99th=[42206] 00:34:22.861 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:34:22.861 slat (usec): min=10, max=30699, avg=96.05, stdev=1356.18 00:34:22.861 clat (usec): min=241, max=957, avg=471.52, stdev=95.25 00:34:22.861 lat (usec): min=277, max=31435, avg=567.57, stdev=1371.35 00:34:22.861 clat percentiles (usec): 00:34:22.861 | 1.00th=[ 281], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 396], 00:34:22.861 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 486], 00:34:22.861 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 644], 00:34:22.861 | 99.00th=[ 758], 99.50th=[ 824], 99.90th=[ 955], 99.95th=[ 955], 00:34:22.861 | 99.99th=[ 955] 00:34:22.861 bw ( KiB/s): min= 4096, max= 4096, per=50.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:22.861 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:22.861 lat (usec) : 250=0.19%, 500=65.85%, 750=29.25%, 1000=1.32% 00:34:22.861 lat (msec) : 50=3.40% 00:34:22.861 cpu : usr=1.06%, sys=1.35%, ctx=533, majf=0, minf=1 00:34:22.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.861 issued rwts: total=18,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:22.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.861 job2: (groupid=0, jobs=1): err= 0: pid=2422522: Fri Dec 6 13:42:09 2024 00:34:22.861 read: IOPS=436, BW=1746KiB/s (1788kB/s)(1748KiB/1001msec) 00:34:22.861 slat (nsec): min=7701, max=46029, avg=26402.84, stdev=3670.50 00:34:22.861 clat (usec): min=382, max=41004, avg=1683.71, stdev=5423.01 00:34:22.861 lat (usec): min=409, max=41031, avg=1710.11, stdev=5423.07 00:34:22.861 clat percentiles (usec): 00:34:22.861 | 1.00th=[ 594], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 783], 00:34:22.861 | 30.00th=[ 840], 40.00th=[ 889], 50.00th=[ 922], 60.00th=[ 963], 00:34:22.861 | 70.00th=[ 988], 80.00th=[ 1045], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:22.861 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:22.861 | 99.99th=[41157] 00:34:22.861 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:22.861 slat (nsec): min=9594, max=62311, avg=31864.04, stdev=10788.78 00:34:22.861 clat (usec): min=122, max=970, avg=447.64, stdev=168.12 00:34:22.861 lat (usec): min=133, max=1006, avg=479.51, stdev=172.48 00:34:22.861 clat percentiles (usec): 00:34:22.861 | 1.00th=[ 137], 5.00th=[ 229], 10.00th=[ 277], 20.00th=[ 310], 00:34:22.861 | 30.00th=[ 338], 40.00th=[ 367], 50.00th=[ 412], 60.00th=[ 457], 00:34:22.861 | 70.00th=[ 502], 80.00th=[ 594], 90.00th=[ 725], 95.00th=[ 775], 00:34:22.861 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 971], 99.95th=[ 971], 00:34:22.861 | 99.99th=[ 971] 00:34:22.862 bw ( KiB/s): min= 4096, max= 4096, per=50.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:22.862 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:22.862 lat (usec) : 250=3.69%, 500=33.93%, 750=18.02%, 1000=32.03% 00:34:22.862 lat (msec) : 2=11.38%, 20=0.11%, 50=0.84% 00:34:22.862 cpu : usr=2.00%, sys=2.30%, ctx=950, majf=0, minf=1 00:34:22.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:34:22.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.862 issued rwts: total=437,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.862 job3: (groupid=0, jobs=1): err= 0: pid=2422523: Fri Dec 6 13:42:09 2024 00:34:22.862 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:22.862 slat (nsec): min=9068, max=46125, avg=26739.82, stdev=2882.66 00:34:22.862 clat (usec): min=488, max=1465, avg=1190.58, stdev=139.75 00:34:22.862 lat (usec): min=515, max=1492, avg=1217.32, stdev=140.07 00:34:22.862 clat percentiles (usec): 00:34:22.862 | 1.00th=[ 742], 5.00th=[ 922], 10.00th=[ 1020], 20.00th=[ 1123], 00:34:22.862 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1221], 00:34:22.862 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1401], 00:34:22.862 | 99.00th=[ 1434], 99.50th=[ 1450], 99.90th=[ 1467], 99.95th=[ 1467], 00:34:22.862 | 99.99th=[ 1467] 00:34:22.862 write: IOPS=562, BW=2250KiB/s (2304kB/s)(2252KiB/1001msec); 0 zone resets 00:34:22.862 slat (nsec): min=9995, max=72290, avg=32928.44, stdev=8736.00 00:34:22.862 clat (usec): min=116, max=1020, avg=620.05, stdev=199.73 00:34:22.862 lat (usec): min=127, max=1056, avg=652.98, stdev=203.05 00:34:22.862 clat percentiles (usec): 00:34:22.862 | 1.00th=[ 141], 5.00th=[ 269], 10.00th=[ 334], 20.00th=[ 441], 00:34:22.862 | 30.00th=[ 515], 40.00th=[ 586], 50.00th=[ 660], 60.00th=[ 709], 00:34:22.862 | 70.00th=[ 750], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 889], 00:34:22.862 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:34:22.862 | 99.99th=[ 1020] 00:34:22.862 bw ( KiB/s): min= 4096, max= 4096, per=50.54%, avg=4096.00, stdev= 0.00, samples=1 00:34:22.862 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:22.862 lat (usec) : 250=1.86%, 500=12.84%, 
750=22.88%, 1000=18.33% 00:34:22.862 lat (msec) : 2=44.09% 00:34:22.862 cpu : usr=1.20%, sys=3.80%, ctx=1076, majf=0, minf=1 00:34:22.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.862 issued rwts: total=512,563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:22.862 00:34:22.862 Run status group 0 (all jobs): 00:34:22.862 READ: bw=3803KiB/s (3894kB/s), 69.5KiB/s-2046KiB/s (71.2kB/s-2095kB/s), io=3940KiB (4035kB), run=1001-1036msec 00:34:22.862 WRITE: bw=8104KiB/s (8299kB/s), 1977KiB/s-2250KiB/s (2024kB/s-2304kB/s), io=8396KiB (8598kB), run=1001-1036msec 00:34:22.862 00:34:22.862 Disk stats (read/write): 00:34:22.862 nvme0n1: ios=62/512, merge=0/0, ticks=578/244, in_queue=822, util=84.07% 00:34:22.862 nvme0n2: ios=72/512, merge=0/0, ticks=1041/235, in_queue=1276, util=90.19% 00:34:22.862 nvme0n3: ios=330/512, merge=0/0, ticks=679/206, in_queue=885, util=95.24% 00:34:22.862 nvme0n4: ios=466/512, merge=0/0, ticks=1051/297, in_queue=1348, util=94.22% 00:34:22.862 13:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:22.862 [global] 00:34:22.862 thread=1 00:34:22.862 invalidate=1 00:34:22.862 rw=randwrite 00:34:22.862 time_based=1 00:34:22.862 runtime=1 00:34:22.862 ioengine=libaio 00:34:22.862 direct=1 00:34:22.862 bs=4096 00:34:22.862 iodepth=1 00:34:22.862 norandommap=0 00:34:22.862 numjobs=1 00:34:22.862 00:34:22.862 verify_dump=1 00:34:22.862 verify_backlog=512 00:34:22.862 verify_state_save=0 00:34:22.862 do_verify=1 00:34:22.862 verify=crc32c-intel 00:34:22.862 [job0] 00:34:22.862 filename=/dev/nvme0n1 00:34:22.862 [job1] 00:34:22.862 
filename=/dev/nvme0n2 00:34:22.862 [job2] 00:34:22.862 filename=/dev/nvme0n3 00:34:22.862 [job3] 00:34:22.862 filename=/dev/nvme0n4 00:34:22.862 Could not set queue depth (nvme0n1) 00:34:22.862 Could not set queue depth (nvme0n2) 00:34:22.862 Could not set queue depth (nvme0n3) 00:34:22.862 Could not set queue depth (nvme0n4) 00:34:23.122 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.122 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.122 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.122 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.122 fio-3.35 00:34:23.122 Starting 4 threads 00:34:24.522 00:34:24.522 job0: (groupid=0, jobs=1): err= 0: pid=2423265: Fri Dec 6 13:42:10 2024 00:34:24.522 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:24.522 slat (nsec): min=8184, max=60256, avg=27914.01, stdev=2776.55 00:34:24.522 clat (usec): min=530, max=1251, avg=976.13, stdev=108.33 00:34:24.522 lat (usec): min=558, max=1279, avg=1004.05, stdev=108.23 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 668], 5.00th=[ 783], 10.00th=[ 848], 20.00th=[ 906], 00:34:24.522 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:34:24.522 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1156], 00:34:24.522 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:34:24.522 | 99.99th=[ 1254] 00:34:24.522 write: IOPS=755, BW=3021KiB/s (3093kB/s)(3024KiB/1001msec); 0 zone resets 00:34:24.522 slat (nsec): min=9286, max=68435, avg=32545.60, stdev=8562.83 00:34:24.522 clat (usec): min=126, max=1063, avg=596.35, stdev=145.42 00:34:24.522 lat (usec): min=135, max=1073, avg=628.90, stdev=148.13 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 
1.00th=[ 249], 5.00th=[ 334], 10.00th=[ 388], 20.00th=[ 478], 00:34:24.522 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:34:24.522 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 824], 00:34:24.522 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:24.522 | 99.99th=[ 1057] 00:34:24.522 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.522 lat (usec) : 250=0.63%, 500=13.33%, 750=39.04%, 1000=30.99% 00:34:24.522 lat (msec) : 2=16.01% 00:34:24.522 cpu : usr=2.30%, sys=5.50%, ctx=1271, majf=0, minf=1 00:34:24.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 issued rwts: total=512,756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.522 job1: (groupid=0, jobs=1): err= 0: pid=2423266: Fri Dec 6 13:42:10 2024 00:34:24.522 read: IOPS=19, BW=78.0KiB/s (79.9kB/s)(80.0KiB/1025msec) 00:34:24.522 slat (nsec): min=8872, max=26261, avg=22188.85, stdev=6131.22 00:34:24.522 clat (usec): min=789, max=42322, avg=33153.54, stdev=16546.33 00:34:24.522 lat (usec): min=798, max=42332, avg=33175.73, stdev=16549.15 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 791], 5.00th=[ 791], 10.00th=[ 930], 20.00th=[ 963], 00:34:24.522 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:24.522 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:24.522 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:24.522 | 99.99th=[42206] 00:34:24.522 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:34:24.522 slat (nsec): min=9199, max=64802, 
avg=30424.31, stdev=7587.12 00:34:24.522 clat (usec): min=301, max=1754, avg=666.98, stdev=153.43 00:34:24.522 lat (usec): min=312, max=1788, avg=697.40, stdev=154.99 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 367], 5.00th=[ 437], 10.00th=[ 498], 20.00th=[ 529], 00:34:24.522 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701], 00:34:24.522 | 70.00th=[ 742], 80.00th=[ 791], 90.00th=[ 857], 95.00th=[ 906], 00:34:24.522 | 99.00th=[ 1004], 99.50th=[ 1172], 99.90th=[ 1762], 99.95th=[ 1762], 00:34:24.522 | 99.99th=[ 1762] 00:34:24.522 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.522 lat (usec) : 500=10.34%, 750=59.96%, 1000=25.56% 00:34:24.522 lat (msec) : 2=1.13%, 50=3.01% 00:34:24.522 cpu : usr=0.49%, sys=1.86%, ctx=532, majf=0, minf=1 00:34:24.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.522 job2: (groupid=0, jobs=1): err= 0: pid=2423273: Fri Dec 6 13:42:10 2024 00:34:24.522 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:24.522 slat (nsec): min=7718, max=64252, avg=28607.64, stdev=3698.23 00:34:24.522 clat (usec): min=593, max=1269, avg=1026.61, stdev=84.47 00:34:24.522 lat (usec): min=622, max=1297, avg=1055.22, stdev=84.42 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 783], 5.00th=[ 889], 10.00th=[ 922], 20.00th=[ 971], 00:34:24.522 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1045], 00:34:24.522 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1156], 00:34:24.522 | 99.00th=[ 1188], 99.50th=[ 
1205], 99.90th=[ 1270], 99.95th=[ 1270], 00:34:24.522 | 99.99th=[ 1270] 00:34:24.522 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:34:24.522 slat (nsec): min=9427, max=54895, avg=31047.29, stdev=10564.42 00:34:24.522 clat (usec): min=239, max=1327, avg=632.01, stdev=127.58 00:34:24.522 lat (usec): min=250, max=1340, avg=663.06, stdev=132.62 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 322], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 523], 00:34:24.522 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:34:24.522 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:34:24.522 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 1336], 99.95th=[ 1336], 00:34:24.522 | 99.99th=[ 1336] 00:34:24.522 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.522 lat (usec) : 250=0.17%, 500=8.82%, 750=39.24%, 1000=23.03% 00:34:24.522 lat (msec) : 2=28.74% 00:34:24.522 cpu : usr=2.00%, sys=5.20%, ctx=1191, majf=0, minf=1 00:34:24.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.522 job3: (groupid=0, jobs=1): err= 0: pid=2423278: Fri Dec 6 13:42:10 2024 00:34:24.522 read: IOPS=16, BW=67.8KiB/s (69.4kB/s)(68.0KiB/1003msec) 00:34:24.522 slat (nsec): min=25559, max=29682, avg=27279.76, stdev=825.47 00:34:24.522 clat (usec): min=1130, max=42086, avg=39431.30, stdev=9873.69 00:34:24.522 lat (usec): min=1157, max=42113, avg=39458.58, stdev=9873.61 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 
20.00th=[41681], 00:34:24.522 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:24.522 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:24.522 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:24.522 | 99.99th=[42206] 00:34:24.522 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:34:24.522 slat (nsec): min=9004, max=53570, avg=30572.02, stdev=8858.38 00:34:24.522 clat (usec): min=245, max=1164, avg=610.15, stdev=138.40 00:34:24.522 lat (usec): min=255, max=1197, avg=640.73, stdev=141.43 00:34:24.522 clat percentiles (usec): 00:34:24.522 | 1.00th=[ 293], 5.00th=[ 388], 10.00th=[ 429], 20.00th=[ 498], 00:34:24.522 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:34:24.522 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 824], 00:34:24.522 | 99.00th=[ 947], 99.50th=[ 1012], 99.90th=[ 1172], 99.95th=[ 1172], 00:34:24.522 | 99.99th=[ 1172] 00:34:24.522 bw ( KiB/s): min= 4096, max= 4096, per=42.70%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.522 lat (usec) : 250=0.38%, 500=19.85%, 750=61.81%, 1000=14.18% 00:34:24.522 lat (msec) : 2=0.76%, 50=3.02% 00:34:24.522 cpu : usr=1.30%, sys=1.90%, ctx=529, majf=0, minf=2 00:34:24.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.522 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.522 00:34:24.522 Run status group 0 (all jobs): 00:34:24.523 READ: bw=4140KiB/s (4240kB/s), 67.8KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1025msec 00:34:24.523 WRITE: bw=9592KiB/s (9822kB/s), 1998KiB/s-3021KiB/s 
(2046kB/s-3093kB/s), io=9832KiB (10.1MB), run=1001-1025msec 00:34:24.523 00:34:24.523 Disk stats (read/write): 00:34:24.523 nvme0n1: ios=549/512, merge=0/0, ticks=594/222, in_queue=816, util=90.78% 00:34:24.523 nvme0n2: ios=65/512, merge=0/0, ticks=541/329, in_queue=870, util=90.83% 00:34:24.523 nvme0n3: ios=512/512, merge=0/0, ticks=1581/267, in_queue=1848, util=96.52% 00:34:24.523 nvme0n4: ios=42/512, merge=0/0, ticks=630/241, in_queue=871, util=94.55% 00:34:24.523 13:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:24.523 [global] 00:34:24.523 thread=1 00:34:24.523 invalidate=1 00:34:24.523 rw=write 00:34:24.523 time_based=1 00:34:24.523 runtime=1 00:34:24.523 ioengine=libaio 00:34:24.523 direct=1 00:34:24.523 bs=4096 00:34:24.523 iodepth=128 00:34:24.523 norandommap=0 00:34:24.523 numjobs=1 00:34:24.523 00:34:24.523 verify_dump=1 00:34:24.523 verify_backlog=512 00:34:24.523 verify_state_save=0 00:34:24.523 do_verify=1 00:34:24.523 verify=crc32c-intel 00:34:24.523 [job0] 00:34:24.523 filename=/dev/nvme0n1 00:34:24.523 [job1] 00:34:24.523 filename=/dev/nvme0n2 00:34:24.523 [job2] 00:34:24.523 filename=/dev/nvme0n3 00:34:24.523 [job3] 00:34:24.523 filename=/dev/nvme0n4 00:34:24.523 Could not set queue depth (nvme0n1) 00:34:24.523 Could not set queue depth (nvme0n2) 00:34:24.523 Could not set queue depth (nvme0n3) 00:34:24.523 Could not set queue depth (nvme0n4) 00:34:24.783 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:24.783 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:24.783 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:24.783 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:34:24.783 fio-3.35 00:34:24.783 Starting 4 threads 00:34:26.164 00:34:26.164 job0: (groupid=0, jobs=1): err= 0: pid=2424000: Fri Dec 6 13:42:12 2024 00:34:26.164 read: IOPS=5145, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1007msec) 00:34:26.164 slat (nsec): min=913, max=23150k, avg=105243.25, stdev=768950.56 00:34:26.164 clat (usec): min=1863, max=55190, avg=13225.58, stdev=8802.70 00:34:26.164 lat (usec): min=1865, max=55194, avg=13330.82, stdev=8843.63 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 3884], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7898], 00:34:26.164 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10814], 00:34:26.164 | 70.00th=[12780], 80.00th=[16057], 90.00th=[25297], 95.00th=[32375], 00:34:26.164 | 99.00th=[48497], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:34:26.164 | 99.99th=[55313] 00:34:26.164 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:34:26.164 slat (nsec): min=1611, max=12142k, avg=77582.35, stdev=520313.15 00:34:26.164 clat (usec): min=1317, max=42940, avg=10465.77, stdev=6420.45 00:34:26.164 lat (usec): min=1328, max=42943, avg=10543.35, stdev=6446.87 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6194], 20.00th=[ 6587], 00:34:26.164 | 30.00th=[ 7177], 40.00th=[ 8225], 50.00th=[ 8717], 60.00th=[ 8979], 00:34:26.164 | 70.00th=[ 9634], 80.00th=[11731], 90.00th=[17695], 95.00th=[25560], 00:34:26.164 | 99.00th=[39060], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:26.164 | 99.99th=[42730] 00:34:26.164 bw ( KiB/s): min=17008, max=27528, per=24.41%, avg=22268.00, stdev=7438.76, samples=2 00:34:26.164 iops : min= 4252, max= 6882, avg=5567.00, stdev=1859.69, samples=2 00:34:26.164 lat (msec) : 2=0.10%, 4=0.51%, 10=60.08%, 20=28.54%, 50=10.49% 00:34:26.164 lat (msec) : 100=0.29% 00:34:26.164 cpu : usr=2.19%, sys=3.38%, ctx=550, majf=0, minf=1 00:34:26.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:26.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:26.164 issued rwts: total=5182,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:26.164 job1: (groupid=0, jobs=1): err= 0: pid=2424012: Fri Dec 6 13:42:12 2024 00:34:26.164 read: IOPS=6734, BW=26.3MiB/s (27.6MB/s)(26.5MiB/1007msec) 00:34:26.164 slat (nsec): min=948, max=9002.4k, avg=73535.75, stdev=537800.39 00:34:26.164 clat (usec): min=1123, max=23521, avg=9134.53, stdev=2955.29 00:34:26.164 lat (usec): min=2524, max=23523, avg=9208.07, stdev=2993.09 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 3916], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6915], 00:34:26.164 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:34:26.164 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12911], 95.00th=[15401], 00:34:26.164 | 99.00th=[19792], 99.50th=[21103], 99.90th=[22152], 99.95th=[23462], 00:34:26.164 | 99.99th=[23462] 00:34:26.164 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:34:26.164 slat (nsec): min=1620, max=17624k, avg=65755.05, stdev=449792.64 00:34:26.164 clat (usec): min=1356, max=23519, avg=9181.01, stdev=4065.27 00:34:26.164 lat (usec): min=1363, max=23522, avg=9246.77, stdev=4084.41 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 3163], 5.00th=[ 4113], 10.00th=[ 4817], 20.00th=[ 5997], 00:34:26.164 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 9110], 00:34:26.164 | 70.00th=[10814], 80.00th=[13435], 90.00th=[14877], 95.00th=[16909], 00:34:26.164 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21890], 99.95th=[22152], 00:34:26.164 | 99.99th=[23462] 00:34:26.164 bw ( KiB/s): min=28656, max=28672, per=31.42%, avg=28664.00, stdev=11.31, samples=2 00:34:26.164 iops : min= 7164, max= 7168, 
avg=7166.00, stdev= 2.83, samples=2 00:34:26.164 lat (msec) : 2=0.16%, 4=2.67%, 10=64.99%, 20=31.32%, 50=0.86% 00:34:26.164 cpu : usr=4.77%, sys=6.46%, ctx=521, majf=0, minf=1 00:34:26.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:26.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:26.164 issued rwts: total=6782,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:26.164 job2: (groupid=0, jobs=1): err= 0: pid=2424016: Fri Dec 6 13:42:12 2024 00:34:26.164 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:34:26.164 slat (nsec): min=916, max=13708k, avg=97630.20, stdev=679514.01 00:34:26.164 clat (usec): min=3987, max=48637, avg=12386.26, stdev=7106.12 00:34:26.164 lat (usec): min=3994, max=48652, avg=12483.89, stdev=7166.16 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 5276], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 7832], 00:34:26.164 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[10028], 60.00th=[11207], 00:34:26.164 | 70.00th=[12387], 80.00th=[14091], 90.00th=[21627], 95.00th=[29230], 00:34:26.164 | 99.00th=[38536], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:34:26.164 | 99.99th=[48497] 00:34:26.164 write: IOPS=5532, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1005msec); 0 zone resets 00:34:26.164 slat (nsec): min=1565, max=14676k, avg=84202.48, stdev=611308.52 00:34:26.164 clat (usec): min=367, max=41473, avg=11526.00, stdev=6713.76 00:34:26.164 lat (usec): min=401, max=41506, avg=11610.21, stdev=6772.54 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 1876], 5.00th=[ 5014], 10.00th=[ 5735], 20.00th=[ 6849], 00:34:26.164 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[10421], 00:34:26.164 | 70.00th=[12649], 80.00th=[16057], 90.00th=[22414], 95.00th=[25822], 00:34:26.164 | 99.00th=[32900], 
99.50th=[33424], 99.90th=[34866], 99.95th=[40109], 00:34:26.164 | 99.99th=[41681] 00:34:26.164 bw ( KiB/s): min=18024, max=25440, per=23.82%, avg=21732.00, stdev=5243.90, samples=2 00:34:26.164 iops : min= 4506, max= 6360, avg=5433.00, stdev=1310.98, samples=2 00:34:26.164 lat (usec) : 500=0.01%, 750=0.01% 00:34:26.164 lat (msec) : 2=0.66%, 4=1.34%, 10=52.28%, 20=32.55%, 50=13.16% 00:34:26.164 cpu : usr=4.08%, sys=4.88%, ctx=451, majf=0, minf=3 00:34:26.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:26.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:26.164 issued rwts: total=5120,5560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:26.164 job3: (groupid=0, jobs=1): err= 0: pid=2424017: Fri Dec 6 13:42:12 2024 00:34:26.164 read: IOPS=4330, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1006msec) 00:34:26.164 slat (nsec): min=917, max=15686k, avg=117854.24, stdev=830224.48 00:34:26.164 clat (usec): min=1089, max=59338, avg=15653.30, stdev=10024.97 00:34:26.164 lat (usec): min=1707, max=61872, avg=15771.16, stdev=10070.85 00:34:26.164 clat percentiles (usec): 00:34:26.164 | 1.00th=[ 4359], 5.00th=[ 5800], 10.00th=[ 6915], 20.00th=[ 8455], 00:34:26.164 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[12780], 60.00th=[13960], 00:34:26.164 | 70.00th=[16057], 80.00th=[21365], 90.00th=[30016], 95.00th=[38536], 00:34:26.164 | 99.00th=[49546], 99.50th=[54789], 99.90th=[59507], 99.95th=[59507], 00:34:26.164 | 99.99th=[59507] 00:34:26.164 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:34:26.164 slat (nsec): min=1581, max=14359k, avg=99554.71, stdev=667538.34 00:34:26.164 clat (usec): min=512, max=65729, avg=12894.35, stdev=9433.18 00:34:26.164 lat (usec): min=526, max=65738, avg=12993.91, stdev=9489.41 00:34:26.164 clat percentiles (usec): 
00:34:26.164 | 1.00th=[ 1614], 5.00th=[ 4490], 10.00th=[ 6128], 20.00th=[ 7177], 00:34:26.164 | 30.00th=[ 7963], 40.00th=[ 9110], 50.00th=[10290], 60.00th=[11338], 00:34:26.164 | 70.00th=[13566], 80.00th=[15926], 90.00th=[25035], 95.00th=[31851], 00:34:26.164 | 99.00th=[59507], 99.50th=[62129], 99.90th=[65799], 99.95th=[65799], 00:34:26.164 | 99.99th=[65799] 00:34:26.164 bw ( KiB/s): min=16384, max=20480, per=20.20%, avg=18432.00, stdev=2896.31, samples=2 00:34:26.164 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:34:26.164 lat (usec) : 750=0.02% 00:34:26.164 lat (msec) : 2=1.09%, 4=0.97%, 10=37.57%, 20=42.97%, 50=16.18% 00:34:26.164 lat (msec) : 100=1.19% 00:34:26.164 cpu : usr=3.48%, sys=3.78%, ctx=425, majf=0, minf=1 00:34:26.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:26.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:26.164 issued rwts: total=4356,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:26.164 00:34:26.164 Run status group 0 (all jobs): 00:34:26.164 READ: bw=83.2MiB/s (87.2MB/s), 16.9MiB/s-26.3MiB/s (17.7MB/s-27.6MB/s), io=83.8MiB (87.8MB), run=1005-1007msec 00:34:26.164 WRITE: bw=89.1MiB/s (93.4MB/s), 17.9MiB/s-27.8MiB/s (18.8MB/s-29.2MB/s), io=89.7MiB (94.1MB), run=1005-1007msec 00:34:26.164 00:34:26.164 Disk stats (read/write): 00:34:26.164 nvme0n1: ios=4694/5120, merge=0/0, ticks=16708/15518, in_queue=32226, util=97.29% 00:34:26.164 nvme0n2: ios=5620/5632, merge=0/0, ticks=50302/52047, in_queue=102349, util=96.33% 00:34:26.164 nvme0n3: ios=4096/4293, merge=0/0, ticks=27729/30709, in_queue=58438, util=87.95% 00:34:26.164 nvme0n4: ios=3623/3742, merge=0/0, ticks=21586/19890, in_queue=41476, util=91.34% 00:34:26.164 13:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:26.164 [global] 00:34:26.164 thread=1 00:34:26.164 invalidate=1 00:34:26.164 rw=randwrite 00:34:26.164 time_based=1 00:34:26.164 runtime=1 00:34:26.164 ioengine=libaio 00:34:26.164 direct=1 00:34:26.164 bs=4096 00:34:26.164 iodepth=128 00:34:26.164 norandommap=0 00:34:26.164 numjobs=1 00:34:26.164 00:34:26.164 verify_dump=1 00:34:26.164 verify_backlog=512 00:34:26.164 verify_state_save=0 00:34:26.164 do_verify=1 00:34:26.164 verify=crc32c-intel 00:34:26.164 [job0] 00:34:26.164 filename=/dev/nvme0n1 00:34:26.164 [job1] 00:34:26.165 filename=/dev/nvme0n2 00:34:26.165 [job2] 00:34:26.165 filename=/dev/nvme0n3 00:34:26.165 [job3] 00:34:26.165 filename=/dev/nvme0n4 00:34:26.165 Could not set queue depth (nvme0n1) 00:34:26.165 Could not set queue depth (nvme0n2) 00:34:26.165 Could not set queue depth (nvme0n3) 00:34:26.165 Could not set queue depth (nvme0n4) 00:34:26.427 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:26.427 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:26.427 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:26.427 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:26.427 fio-3.35 00:34:26.427 Starting 4 threads 00:34:27.812 00:34:27.812 job0: (groupid=0, jobs=1): err= 0: pid=2424488: Fri Dec 6 13:42:14 2024 00:34:27.812 read: IOPS=5428, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1003msec) 00:34:27.812 slat (nsec): min=926, max=16791k, avg=86175.81, stdev=620865.65 00:34:27.812 clat (usec): min=2058, max=52445, avg=11169.43, stdev=6842.51 00:34:27.812 lat (usec): min=2614, max=52471, avg=11255.60, stdev=6896.51 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 4113], 
5.00th=[ 5866], 10.00th=[ 6718], 20.00th=[ 7570], 00:34:27.812 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10290], 00:34:27.812 | 70.00th=[11076], 80.00th=[12518], 90.00th=[14222], 95.00th=[28443], 00:34:27.812 | 99.00th=[42206], 99.50th=[45351], 99.90th=[50594], 99.95th=[50594], 00:34:27.812 | 99.99th=[52691] 00:34:27.812 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:34:27.812 slat (nsec): min=1572, max=15397k, avg=88764.12, stdev=665523.56 00:34:27.812 clat (usec): min=684, max=52040, avg=11792.84, stdev=7869.44 00:34:27.812 lat (usec): min=693, max=52071, avg=11881.60, stdev=7941.49 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 4178], 5.00th=[ 5145], 10.00th=[ 6521], 20.00th=[ 7373], 00:34:27.812 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10159], 00:34:27.812 | 70.00th=[10945], 80.00th=[12125], 90.00th=[22676], 95.00th=[33817], 00:34:27.812 | 99.00th=[38536], 99.50th=[42730], 99.90th=[46400], 99.95th=[47449], 00:34:27.812 | 99.99th=[52167] 00:34:27.812 bw ( KiB/s): min=21752, max=23304, per=24.02%, avg=22528.00, stdev=1097.43, samples=2 00:34:27.812 iops : min= 5438, max= 5826, avg=5632.00, stdev=274.36, samples=2 00:34:27.812 lat (usec) : 750=0.03% 00:34:27.812 lat (msec) : 2=0.21%, 4=0.45%, 10=57.35%, 20=32.96%, 50=8.89% 00:34:27.812 lat (msec) : 100=0.11% 00:34:27.812 cpu : usr=3.19%, sys=5.49%, ctx=450, majf=0, minf=2 00:34:27.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:27.812 issued rwts: total=5445,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:27.812 job1: (groupid=0, jobs=1): err= 0: pid=2424505: Fri Dec 6 13:42:14 2024 00:34:27.812 read: IOPS=6611, BW=25.8MiB/s (27.1MB/s)(25.9MiB/1003msec) 
00:34:27.812 slat (nsec): min=903, max=9552.2k, avg=73765.36, stdev=427781.11 00:34:27.812 clat (usec): min=1527, max=35489, avg=9421.52, stdev=3884.52 00:34:27.812 lat (usec): min=4224, max=35496, avg=9495.29, stdev=3904.64 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 4686], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7504], 00:34:27.812 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 8979], 00:34:27.812 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[11338], 95.00th=[14615], 00:34:27.812 | 99.00th=[29492], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:34:27.812 | 99.99th=[35390] 00:34:27.812 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:34:27.812 slat (nsec): min=1555, max=13035k, avg=71598.43, stdev=426622.06 00:34:27.812 clat (usec): min=3756, max=35588, avg=9590.11, stdev=4502.38 00:34:27.812 lat (usec): min=3763, max=35611, avg=9661.71, stdev=4522.85 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 5080], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7111], 00:34:27.812 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:34:27.812 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[13042], 95.00th=[17695], 00:34:27.812 | 99.00th=[28967], 99.50th=[29492], 99.90th=[35390], 99.95th=[35390], 00:34:27.812 | 99.99th=[35390] 00:34:27.812 bw ( KiB/s): min=24512, max=28736, per=28.38%, avg=26624.00, stdev=2986.82, samples=2 00:34:27.812 iops : min= 6128, max= 7184, avg=6656.00, stdev=746.70, samples=2 00:34:27.812 lat (msec) : 2=0.01%, 4=0.10%, 10=80.03%, 20=16.50%, 50=3.36% 00:34:27.812 cpu : usr=3.59%, sys=5.99%, ctx=738, majf=0, minf=1 00:34:27.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:27.812 issued rwts: total=6631,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:27.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:27.812 job2: (groupid=0, jobs=1): err= 0: pid=2424523: Fri Dec 6 13:42:14 2024 00:34:27.812 read: IOPS=6296, BW=24.6MiB/s (25.8MB/s)(25.8MiB/1048msec) 00:34:27.812 slat (nsec): min=968, max=12854k, avg=77695.68, stdev=505827.88 00:34:27.812 clat (usec): min=1927, max=58742, avg=10915.63, stdev=7186.50 00:34:27.812 lat (usec): min=1935, max=58750, avg=10993.32, stdev=7208.98 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 4293], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 8225], 00:34:27.812 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:34:27.812 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[13566], 95.00th=[21627], 00:34:27.812 | 99.00th=[50594], 99.50th=[54264], 99.90th=[58459], 99.95th=[58459], 00:34:27.812 | 99.99th=[58983] 00:34:27.812 write: IOPS=6351, BW=24.8MiB/s (26.0MB/s)(26.0MiB/1048msec); 0 zone resets 00:34:27.812 slat (nsec): min=1591, max=14543k, avg=67310.84, stdev=445124.10 00:34:27.812 clat (usec): min=1767, max=29862, avg=9042.36, stdev=3052.83 00:34:27.812 lat (usec): min=1781, max=29872, avg=9109.67, stdev=3070.54 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 6390], 20.00th=[ 7439], 00:34:27.812 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:34:27.812 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[13698], 00:34:27.812 | 99.00th=[22676], 99.50th=[27395], 99.90th=[29754], 99.95th=[29754], 00:34:27.812 | 99.99th=[29754] 00:34:27.812 bw ( KiB/s): min=26000, max=27248, per=28.38%, avg=26624.00, stdev=882.47, samples=2 00:34:27.812 iops : min= 6500, max= 6812, avg=6656.00, stdev=220.62, samples=2 00:34:27.812 lat (msec) : 2=0.05%, 4=0.27%, 10=77.52%, 20=18.36%, 50=3.03% 00:34:27.812 lat (msec) : 100=0.78% 00:34:27.812 cpu : usr=4.30%, sys=6.78%, ctx=539, majf=0, minf=1 00:34:27.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, 
>=64=99.5% 00:34:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:27.812 issued rwts: total=6599,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:27.812 job3: (groupid=0, jobs=1): err= 0: pid=2424529: Fri Dec 6 13:42:14 2024 00:34:27.812 read: IOPS=5206, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec) 00:34:27.812 slat (nsec): min=952, max=10041k, avg=82225.27, stdev=644289.14 00:34:27.812 clat (usec): min=1679, max=33064, avg=11546.16, stdev=4547.73 00:34:27.812 lat (usec): min=1706, max=33088, avg=11628.38, stdev=4607.52 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 3556], 5.00th=[ 6783], 10.00th=[ 8356], 20.00th=[ 8979], 00:34:27.812 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10552], 00:34:27.812 | 70.00th=[11600], 80.00th=[13698], 90.00th=[19006], 95.00th=[21627], 00:34:27.812 | 99.00th=[25297], 99.50th=[26084], 99.90th=[30016], 99.95th=[30802], 00:34:27.812 | 99.99th=[33162] 00:34:27.812 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:34:27.812 slat (nsec): min=1588, max=8290.0k, avg=79496.19, stdev=493889.94 00:34:27.812 clat (usec): min=781, max=58880, avg=11946.68, stdev=8295.18 00:34:27.812 lat (usec): min=790, max=58890, avg=12026.17, stdev=8344.41 00:34:27.812 clat percentiles (usec): 00:34:27.812 | 1.00th=[ 1418], 5.00th=[ 3032], 10.00th=[ 4621], 20.00th=[ 6718], 00:34:27.812 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11076], 00:34:27.812 | 70.00th=[13829], 80.00th=[16188], 90.00th=[20055], 95.00th=[27132], 00:34:27.812 | 99.00th=[48497], 99.50th=[54789], 99.90th=[58983], 99.95th=[58983], 00:34:27.812 | 99.99th=[58983] 00:34:27.812 bw ( KiB/s): min=20480, max=24400, per=23.92%, avg=22440.00, stdev=2771.86, samples=2 00:34:27.812 iops : min= 5120, max= 6100, avg=5610.00, stdev=692.96, 
samples=2 00:34:27.812 lat (usec) : 1000=0.04% 00:34:27.812 lat (msec) : 2=1.17%, 4=3.93%, 10=48.25%, 20=37.71%, 50=8.47% 00:34:27.812 lat (msec) : 100=0.43% 00:34:27.812 cpu : usr=4.37%, sys=5.76%, ctx=426, majf=0, minf=1 00:34:27.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:27.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:27.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:27.812 issued rwts: total=5248,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:27.812 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:27.812 00:34:27.812 Run status group 0 (all jobs): 00:34:27.812 READ: bw=89.2MiB/s (93.5MB/s), 20.3MiB/s-25.8MiB/s (21.3MB/s-27.1MB/s), io=93.4MiB (98.0MB), run=1003-1048msec 00:34:27.812 WRITE: bw=91.6MiB/s (96.1MB/s), 21.8MiB/s-25.9MiB/s (22.9MB/s-27.2MB/s), io=96.0MiB (101MB), run=1003-1048msec 00:34:27.812 00:34:27.812 Disk stats (read/write): 00:34:27.812 nvme0n1: ios=4153/4608, merge=0/0, ticks=23855/22387, in_queue=46242, util=92.48% 00:34:27.812 nvme0n2: ios=5890/6144, merge=0/0, ticks=17990/18770, in_queue=36760, util=96.13% 00:34:27.812 nvme0n3: ios=5181/5632, merge=0/0, ticks=21427/21491, in_queue=42918, util=100.00% 00:34:27.812 nvme0n4: ios=4231/4608, merge=0/0, ticks=39848/39353, in_queue=79201, util=97.87% 00:34:27.812 13:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:27.812 13:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2424593 00:34:27.812 13:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:27.812 13:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:27.812 [global] 00:34:27.812 thread=1 00:34:27.812 invalidate=1 00:34:27.812 
rw=read 00:34:27.812 time_based=1 00:34:27.812 runtime=10 00:34:27.812 ioengine=libaio 00:34:27.812 direct=1 00:34:27.812 bs=4096 00:34:27.812 iodepth=1 00:34:27.812 norandommap=1 00:34:27.812 numjobs=1 00:34:27.812 00:34:27.812 [job0] 00:34:27.812 filename=/dev/nvme0n1 00:34:27.812 [job1] 00:34:27.813 filename=/dev/nvme0n2 00:34:27.813 [job2] 00:34:27.813 filename=/dev/nvme0n3 00:34:27.813 [job3] 00:34:27.813 filename=/dev/nvme0n4 00:34:27.813 Could not set queue depth (nvme0n1) 00:34:27.813 Could not set queue depth (nvme0n2) 00:34:27.813 Could not set queue depth (nvme0n3) 00:34:27.813 Could not set queue depth (nvme0n4) 00:34:28.072 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.072 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.072 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.072 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.072 fio-3.35 00:34:28.072 Starting 4 threads 00:34:31.377 13:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:31.377 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1605632, buflen=4096 00:34:31.377 fio: pid=2424995, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:31.377 13:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:31.377 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7475200, buflen=4096 00:34:31.377 fio: pid=2424984, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:31.377 13:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:31.377 13:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:31.377 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2019328, buflen=4096 00:34:31.377 fio: pid=2424929, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:31.377 13:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:31.377 13:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:31.377 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6594560, buflen=4096 00:34:31.377 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:31.377 fio: pid=2424955, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:31.377 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:31.639 00:34:31.639 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2424929: Fri Dec 6 13:42:18 2024 00:34:31.639 read: IOPS=170, BW=681KiB/s (697kB/s)(1972KiB/2897msec) 00:34:31.639 slat (usec): min=24, max=11567, avg=53.68, stdev=523.48 00:34:31.639 clat (usec): min=857, max=42130, avg=5814.23, stdev=12969.83 00:34:31.639 lat (usec): min=884, max=42929, avg=5867.97, stdev=12979.86 00:34:31.639 clat 
percentiles (usec): 00:34:31.639 | 1.00th=[ 922], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1057], 00:34:31.639 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1106], 00:34:31.639 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[41157], 95.00th=[41681], 00:34:31.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:31.639 | 99.99th=[42206] 00:34:31.639 bw ( KiB/s): min= 96, max= 1576, per=13.80%, avg=771.20, stdev=675.91, samples=5 00:34:31.639 iops : min= 24, max= 394, avg=192.80, stdev=168.98, samples=5 00:34:31.639 lat (usec) : 1000=5.26% 00:34:31.639 lat (msec) : 2=82.79%, 50=11.74% 00:34:31.639 cpu : usr=0.38%, sys=0.62%, ctx=497, majf=0, minf=2 00:34:31.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:31.639 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2424955: Fri Dec 6 13:42:18 2024 00:34:31.639 read: IOPS=520, BW=2082KiB/s (2132kB/s)(6440KiB/3093msec) 00:34:31.639 slat (usec): min=6, max=19487, avg=48.12, stdev=630.26 00:34:31.639 clat (usec): min=353, max=44564, avg=1867.04, stdev=6585.10 00:34:31.639 lat (usec): min=360, max=46955, avg=1915.12, stdev=6633.09 00:34:31.639 clat percentiles (usec): 00:34:31.639 | 1.00th=[ 506], 5.00th=[ 611], 10.00th=[ 652], 20.00th=[ 701], 00:34:31.639 | 30.00th=[ 734], 40.00th=[ 750], 50.00th=[ 766], 60.00th=[ 783], 00:34:31.639 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 1020], 00:34:31.639 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[44827], 00:34:31.639 | 99.99th=[44827] 00:34:31.639 bw ( KiB/s): min= 96, max= 5152, per=38.14%, avg=2131.33, 
stdev=2358.49, samples=6 00:34:31.639 iops : min= 24, max= 1288, avg=532.83, stdev=589.62, samples=6 00:34:31.639 lat (usec) : 500=0.87%, 750=39.79%, 1000=54.07% 00:34:31.639 lat (msec) : 2=2.42%, 10=0.06%, 50=2.73% 00:34:31.639 cpu : usr=0.61%, sys=1.33%, ctx=1616, majf=0, minf=1 00:34:31.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:31.639 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2424984: Fri Dec 6 13:42:18 2024 00:34:31.639 read: IOPS=675, BW=2700KiB/s (2764kB/s)(7300KiB/2704msec) 00:34:31.639 slat (usec): min=5, max=17266, avg=40.60, stdev=508.18 00:34:31.639 clat (usec): min=346, max=41287, avg=1434.89, stdev=5281.80 00:34:31.639 lat (usec): min=372, max=41299, avg=1475.51, stdev=5304.14 00:34:31.639 clat percentiles (usec): 00:34:31.639 | 1.00th=[ 437], 5.00th=[ 523], 10.00th=[ 553], 20.00th=[ 619], 00:34:31.639 | 30.00th=[ 676], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 783], 00:34:31.639 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 930], 00:34:31.639 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:31.639 | 99.99th=[41157] 00:34:31.639 bw ( KiB/s): min= 96, max= 5632, per=45.55%, avg=2545.60, stdev=2470.94, samples=5 00:34:31.639 iops : min= 24, max= 1408, avg=636.40, stdev=617.74, samples=5 00:34:31.639 lat (usec) : 500=3.50%, 750=45.51%, 1000=46.50% 00:34:31.639 lat (msec) : 2=2.63%, 10=0.05%, 50=1.75% 00:34:31.639 cpu : usr=0.59%, sys=1.96%, ctx=1828, majf=0, minf=2 00:34:31.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 issued rwts: total=1826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:31.639 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2424995: Fri Dec 6 13:42:18 2024 00:34:31.639 read: IOPS=155, BW=619KiB/s (633kB/s)(1568KiB/2535msec) 00:34:31.639 slat (nsec): min=5903, max=60933, avg=23820.02, stdev=9042.44 00:34:31.639 clat (usec): min=384, max=42077, avg=6429.76, stdev=14068.25 00:34:31.639 lat (usec): min=412, max=42104, avg=6453.57, stdev=14069.98 00:34:31.639 clat percentiles (usec): 00:34:31.639 | 1.00th=[ 457], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 644], 00:34:31.639 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 791], 00:34:31.639 | 70.00th=[ 840], 80.00th=[ 938], 90.00th=[41157], 95.00th=[41157], 00:34:31.639 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:31.639 | 99.99th=[42206] 00:34:31.639 bw ( KiB/s): min= 96, max= 2552, per=11.19%, avg=625.60, stdev=1078.91, samples=5 00:34:31.639 iops : min= 24, max= 638, avg=156.40, stdev=269.73, samples=5 00:34:31.639 lat (usec) : 500=2.04%, 750=46.56%, 1000=34.10% 00:34:31.639 lat (msec) : 2=2.80%, 10=0.25%, 50=13.99% 00:34:31.639 cpu : usr=0.04%, sys=0.51%, ctx=394, majf=0, minf=2 00:34:31.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.639 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:31.639 00:34:31.639 Run status group 0 (all jobs): 00:34:31.639 READ: bw=5587KiB/s (5721kB/s), 619KiB/s-2700KiB/s (633kB/s-2764kB/s), 
io=16.9MiB (17.7MB), run=2535-3093msec 00:34:31.639 00:34:31.639 Disk stats (read/write): 00:34:31.639 nvme0n1: ios=490/0, merge=0/0, ticks=2705/0, in_queue=2705, util=92.49% 00:34:31.639 nvme0n2: ios=1608/0, merge=0/0, ticks=2885/0, in_queue=2885, util=93.15% 00:34:31.639 nvme0n3: ios=1632/0, merge=0/0, ticks=2451/0, in_queue=2451, util=95.51% 00:34:31.639 nvme0n4: ios=423/0, merge=0/0, ticks=3062/0, in_queue=3062, util=99.28% 00:34:31.639 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:31.639 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:31.901 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:31.901 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:32.161 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:32.161 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:32.161 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:32.161 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:32.422 13:42:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:32.422 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2424593 00:34:32.422 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:32.422 13:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:32.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:32.422 nvmf hotplug test: fio failed as expected 00:34:32.422 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.683 rmmod nvme_tcp 00:34:32.683 rmmod nvme_fabrics 00:34:32.683 rmmod nvme_keyring 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:32.683 13:42:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2420884 ']' 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2420884 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2420884 ']' 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2420884 00:34:32.683 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420884 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420884' 00:34:32.944 killing process with pid 2420884 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2420884 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2420884 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.944 13:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.490 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.490 00:34:35.490 real 0m28.072s 00:34:35.490 user 2m13.413s 00:34:35.490 sys 0m11.801s 00:34:35.490 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.490 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.490 ************************************ 00:34:35.490 END TEST nvmf_fio_target 00:34:35.490 ************************************ 00:34:35.490 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:35.490 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:35.491 ************************************ 00:34:35.491 START TEST nvmf_bdevio 00:34:35.491 ************************************ 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:35.491 * Looking for test storage... 00:34:35.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.491 13:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.491 --rc genhtml_branch_coverage=1 
00:34:35.491 --rc genhtml_function_coverage=1 00:34:35.491 --rc genhtml_legend=1 00:34:35.491 --rc geninfo_all_blocks=1 00:34:35.491 --rc geninfo_unexecuted_blocks=1 00:34:35.491 00:34:35.491 ' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.491 --rc genhtml_branch_coverage=1 00:34:35.491 --rc genhtml_function_coverage=1 00:34:35.491 --rc genhtml_legend=1 00:34:35.491 --rc geninfo_all_blocks=1 00:34:35.491 --rc geninfo_unexecuted_blocks=1 00:34:35.491 00:34:35.491 ' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.491 --rc genhtml_branch_coverage=1 00:34:35.491 --rc genhtml_function_coverage=1 00:34:35.491 --rc genhtml_legend=1 00:34:35.491 --rc geninfo_all_blocks=1 00:34:35.491 --rc geninfo_unexecuted_blocks=1 00:34:35.491 00:34:35.491 ' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:35.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.491 --rc genhtml_branch_coverage=1 00:34:35.491 --rc genhtml_function_coverage=1 00:34:35.491 --rc genhtml_legend=1 00:34:35.491 --rc geninfo_all_blocks=1 00:34:35.491 --rc geninfo_unexecuted_blocks=1 00:34:35.491 00:34:35.491 ' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.491 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.492 13:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.492 13:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.636 13:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.636 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:43.637 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:43.637 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.637 13:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:43.637 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:43.637 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.637 13:42:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:34:43.637 00:34:43.637 --- 10.0.0.2 ping statistics --- 00:34:43.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.637 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:43.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:34:43.637 00:34:43.637 --- 10.0.0.1 ping statistics --- 00:34:43.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.637 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2429982 00:34:43.637 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2429982 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2429982 ']' 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.638 13:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.638 [2024-12-06 13:42:29.448941] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:43.638 [2024-12-06 13:42:29.450057] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:34:43.638 [2024-12-06 13:42:29.450105] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.638 [2024-12-06 13:42:29.547865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.638 [2024-12-06 13:42:29.600189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.638 [2024-12-06 13:42:29.600239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.638 [2024-12-06 13:42:29.600248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.638 [2024-12-06 13:42:29.600257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.638 [2024-12-06 13:42:29.600263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:43.638 [2024-12-06 13:42:29.602368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:43.638 [2024-12-06 13:42:29.602527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:43.638 [2024-12-06 13:42:29.602687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.638 [2024-12-06 13:42:29.602687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:43.638 [2024-12-06 13:42:29.680615] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:43.638 [2024-12-06 13:42:29.681707] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:43.638 [2024-12-06 13:42:29.681958] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:43.638 [2024-12-06 13:42:29.682425] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:43.638 [2024-12-06 13:42:29.682477] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:43.638 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.638 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:43.638 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.638 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:43.638 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.899 [2024-12-06 13:42:30.327713] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.899 Malloc0 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:43.899 [2024-12-06 13:42:30.427853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:43.899 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:43.899 { 00:34:43.899 "params": { 00:34:43.899 "name": "Nvme$subsystem", 00:34:43.899 "trtype": "$TEST_TRANSPORT", 00:34:43.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.899 "adrfam": "ipv4", 00:34:43.899 "trsvcid": "$NVMF_PORT", 00:34:43.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.899 "hdgst": ${hdgst:-false}, 00:34:43.900 "ddgst": ${ddgst:-false} 00:34:43.900 }, 00:34:43.900 "method": "bdev_nvme_attach_controller" 00:34:43.900 } 00:34:43.900 EOF 00:34:43.900 )") 00:34:43.900 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:43.900 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:43.900 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:43.900 13:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:43.900 "params": { 00:34:43.900 "name": "Nvme1", 00:34:43.900 "trtype": "tcp", 00:34:43.900 "traddr": "10.0.0.2", 00:34:43.900 "adrfam": "ipv4", 00:34:43.900 "trsvcid": "4420", 00:34:43.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.900 "hdgst": false, 00:34:43.900 "ddgst": false 00:34:43.900 }, 00:34:43.900 "method": "bdev_nvme_attach_controller" 00:34:43.900 }' 00:34:43.900 [2024-12-06 13:42:30.495621] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:34:43.900 [2024-12-06 13:42:30.495700] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2430128 ] 00:34:44.162 [2024-12-06 13:42:30.589508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:44.162 [2024-12-06 13:42:30.646003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.162 [2024-12-06 13:42:30.646166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.162 [2024-12-06 13:42:30.646166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.424 I/O targets: 00:34:44.424 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:44.424 00:34:44.424 00:34:44.424 CUnit - A unit testing framework for C - Version 2.1-3 00:34:44.424 http://cunit.sourceforge.net/ 00:34:44.424 00:34:44.424 00:34:44.424 Suite: bdevio tests on: Nvme1n1 00:34:44.424 Test: blockdev write read block ...passed 00:34:44.686 Test: blockdev write zeroes read block ...passed 00:34:44.686 Test: blockdev write zeroes read no split ...passed 00:34:44.686 Test: blockdev 
write zeroes read split ...passed 00:34:44.686 Test: blockdev write zeroes read split partial ...passed 00:34:44.686 Test: blockdev reset ...[2024-12-06 13:42:31.179020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:44.686 [2024-12-06 13:42:31.179121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e5580 (9): Bad file descriptor 00:34:44.686 [2024-12-06 13:42:31.273133] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:44.686 passed 00:34:44.686 Test: blockdev write read 8 blocks ...passed 00:34:44.686 Test: blockdev write read size > 128k ...passed 00:34:44.686 Test: blockdev write read invalid size ...passed 00:34:44.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:44.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:44.946 Test: blockdev write read max offset ...passed 00:34:44.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:44.946 Test: blockdev writev readv 8 blocks ...passed 00:34:44.946 Test: blockdev writev readv 30 x 1block ...passed 00:34:44.946 Test: blockdev writev readv block ...passed 00:34:44.946 Test: blockdev writev readv size > 128k ...passed 00:34:44.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:44.946 Test: blockdev comparev and writev ...[2024-12-06 13:42:31.532565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.946 [2024-12-06 13:42:31.532614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:44.946 [2024-12-06 13:42:31.532630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.946 
[2024-12-06 13:42:31.532639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:44.946 [2024-12-06 13:42:31.533161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.946 [2024-12-06 13:42:31.533174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:44.946 [2024-12-06 13:42:31.533189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.947 [2024-12-06 13:42:31.533197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:44.947 [2024-12-06 13:42:31.533723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.947 [2024-12-06 13:42:31.533736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:44.947 [2024-12-06 13:42:31.533753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.947 [2024-12-06 13:42:31.533762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:44.947 [2024-12-06 13:42:31.534265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.947 [2024-12-06 13:42:31.534278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:44.947 [2024-12-06 13:42:31.534292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:44.947 [2024-12-06 13:42:31.534302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:44.947 passed 00:34:45.208 Test: blockdev nvme passthru rw ...passed 00:34:45.208 Test: blockdev nvme passthru vendor specific ...[2024-12-06 13:42:31.618106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:45.208 [2024-12-06 13:42:31.618122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:45.208 [2024-12-06 13:42:31.618392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:45.208 [2024-12-06 13:42:31.618404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:45.208 [2024-12-06 13:42:31.618690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:45.208 [2024-12-06 13:42:31.618701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:45.208 [2024-12-06 13:42:31.618942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:45.208 [2024-12-06 13:42:31.618960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:45.208 passed 00:34:45.208 Test: blockdev nvme admin passthru ...passed 00:34:45.208 Test: blockdev copy ...passed 00:34:45.208 00:34:45.208 Run Summary: Type Total Ran Passed Failed Inactive 00:34:45.208 suites 1 1 n/a 0 0 00:34:45.208 tests 23 23 23 0 0 00:34:45.208 asserts 152 152 152 0 n/a 00:34:45.208 00:34:45.208 Elapsed time = 1.328 
seconds 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:45.208 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:45.208 rmmod nvme_tcp 00:34:45.208 rmmod nvme_fabrics 00:34:45.468 rmmod nvme_keyring 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2429982 ']' 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2429982 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2429982 ']' 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2429982 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429982 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429982' 00:34:45.468 killing process with pid 2429982 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2429982 00:34:45.468 13:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2429982 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:45.468 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:45.730 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:45.730 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:45.730 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.730 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.730 13:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.641 13:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:47.641 00:34:47.641 real 0m12.525s 00:34:47.641 user 0m11.307s 00:34:47.641 sys 0m6.524s 00:34:47.641 13:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.641 13:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:47.641 ************************************ 00:34:47.641 END TEST nvmf_bdevio 00:34:47.641 ************************************ 00:34:47.641 13:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:47.641 00:34:47.641 real 5m0.568s 00:34:47.641 user 10m14.352s 00:34:47.641 sys 2m4.210s 00:34:47.641 13:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.641 13:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:47.641 ************************************ 00:34:47.641 END TEST nvmf_target_core_interrupt_mode 00:34:47.641 ************************************ 00:34:47.641 13:42:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:47.641 13:42:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:47.641 13:42:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.641 13:42:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.902 ************************************ 00:34:47.902 START TEST nvmf_interrupt 00:34:47.902 ************************************ 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:47.902 * Looking for test storage... 
00:34:47.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.902 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.903 --rc genhtml_branch_coverage=1 00:34:47.903 --rc genhtml_function_coverage=1 00:34:47.903 --rc genhtml_legend=1 00:34:47.903 --rc geninfo_all_blocks=1 00:34:47.903 --rc geninfo_unexecuted_blocks=1 00:34:47.903 00:34:47.903 ' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.903 --rc genhtml_branch_coverage=1 00:34:47.903 --rc 
genhtml_function_coverage=1 00:34:47.903 --rc genhtml_legend=1 00:34:47.903 --rc geninfo_all_blocks=1 00:34:47.903 --rc geninfo_unexecuted_blocks=1 00:34:47.903 00:34:47.903 ' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.903 --rc genhtml_branch_coverage=1 00:34:47.903 --rc genhtml_function_coverage=1 00:34:47.903 --rc genhtml_legend=1 00:34:47.903 --rc geninfo_all_blocks=1 00:34:47.903 --rc geninfo_unexecuted_blocks=1 00:34:47.903 00:34:47.903 ' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.903 --rc genhtml_branch_coverage=1 00:34:47.903 --rc genhtml_function_coverage=1 00:34:47.903 --rc genhtml_legend=1 00:34:47.903 --rc geninfo_all_blocks=1 00:34:47.903 --rc geninfo_unexecuted_blocks=1 00:34:47.903 00:34:47.903 ' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.903 
13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.903 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.165 
13:42:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.165 13:42:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:48.165 
13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.165 13:42:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.407 13:42:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:56.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:56.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.407 13:42:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:56.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:56.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.407 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.408 13:42:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:34:56.408 00:34:56.408 --- 10.0.0.2 ping statistics --- 00:34:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.408 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:56.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:34:56.408 00:34:56.408 --- 10.0.0.1 ping statistics --- 00:34:56.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.408 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.408 13:42:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.408 13:42:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2434568 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2434568 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2434568 ']' 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 [2024-12-06 13:42:42.064934] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.408 [2024-12-06 13:42:42.066063] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:34:56.408 [2024-12-06 13:42:42.066114] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.408 [2024-12-06 13:42:42.166952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:56.408 [2024-12-06 13:42:42.218495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.408 [2024-12-06 13:42:42.218545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.408 [2024-12-06 13:42:42.218554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.408 [2024-12-06 13:42:42.218561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.408 [2024-12-06 13:42:42.218567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.408 [2024-12-06 13:42:42.220205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.408 [2024-12-06 13:42:42.220209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.408 [2024-12-06 13:42:42.298124] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.408 [2024-12-06 13:42:42.298875] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:56.408 [2024-12-06 13:42:42.299105] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:56.408 5000+0 records in 00:34:56.408 5000+0 records out 00:34:56.408 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0190409 s, 538 MB/s 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 AIO0 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:56.408 13:42:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.408 13:42:42 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 [2024-12-06 13:42:42.989225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:56.408 [2024-12-06 13:42:43.033681] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2434568 0 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2434568 0 idle 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:56.408 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:34:56.409 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434568 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0' 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434568 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.31 reactor_0 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:56.726 
13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2434568 1 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2434568 1 idle 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:34:56.726 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434614 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434614 root 20 0 128.2g 
43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2434848 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2434568 0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2434568 0 busy 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434568 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.39 reactor_0' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434568 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.39 reactor_0 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=46.7 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=46 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:56.987 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:56.988 13:42:43 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2434568 1 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2434568 1 busy 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:34:56.988 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434614 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.23 reactor_1' 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434614 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.23 reactor_1 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:57.249 13:42:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2434848 00:35:07.242 Initializing NVMe Controllers 00:35:07.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:07.242 Controller IO queue size 256, less than required. 00:35:07.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:07.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:07.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:07.242 Initialization complete. Launching workers. 
00:35:07.242 ========================================================
00:35:07.242 Latency(us)
00:35:07.242 Device Information : IOPS MiB/s Average min max
00:35:07.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19207.60 75.03 13334.12 4269.29 51542.26
00:35:07.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19998.40 78.12 12801.98 7950.24 30707.42
00:35:07.242 ========================================================
00:35:07.242 Total : 39206.00 153.15 13062.68 4269.29 51542.26
00:35:07.242
00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2434568 0 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2434568 0 idle 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- #
grep reactor_0 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434568 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:20.31 reactor_0' 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434568 root 20 0 128.2g 44928 32256 S 6.2 0.0 0:20.31 reactor_0 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2434568 1 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2434568 1 idle 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:07.242 13:42:53 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:35:07.242 13:42:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434614 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434614 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:07.502 13:42:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:08.092 13:42:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
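The reactor_is_busy / reactor_is_idle traces above all reduce to one check in interrupt/common.sh: take the thread line from `top -bHn 1 -p <pid>` matching `reactor_<idx>`, pull the %CPU column, truncate it to an integer, and compare it against the busy or idle threshold. A minimal sketch of that logic, assuming top's batch output format (`check_reactor_state` is a name chosen here, not the script's own; the sample lines in the usage below are copied from this log):

```shell
# Sketch of the reactor_is_busy_or_idle check traced above.
# $1 = the thread line that `top -bHn 1 -p <pid> | grep reactor_<idx>` prints,
# $2 = expected state (busy|idle), $3 = busy threshold, $4 = idle threshold.
check_reactor_state() {
    local top_line=$1 state=$2 busy_threshold=$3 idle_threshold=$4
    local cpu_rate
    # Column 9 of top's batch thread listing is %CPU; strip leading blanks first.
    cpu_rate=$(printf '%s\n' "$top_line" | sed -e 's/^[[:space:]]*//' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}   # drop the fractional part, as common.sh@28 does
    if [ "$state" = busy ] && [ "$cpu_rate" -lt "$busy_threshold" ]; then
        return 1              # supposedly busy reactor is below the busy threshold
    fi
    if [ "$state" = idle ] && [ "$cpu_rate" -gt "$idle_threshold" ]; then
        return 1              # supposedly idle reactor is above the idle threshold
    fi
    return 0
}
```

With the reactor_1 line captured above (`... R 99.9 0.0 0:00.23 reactor_1`) and the BUSY_THRESHOLD=30 set at target/interrupt.sh@39, the busy check passes; the later `S 0.0 ... reactor_1` lines pass the idle check the same way.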
00:35:08.092 13:42:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:08.092 13:42:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:08.092 13:42:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:08.092 13:42:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2434568 0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2434568 0 idle 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434568 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0' 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434568 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2434568 1 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2434568 1 idle 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2434568 00:35:10.633 
13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2434568 -w 256 00:35:10.633 13:42:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2434614 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2434614 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:10.633 13:42:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:10.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.893 rmmod nvme_tcp 00:35:10.893 rmmod nvme_fabrics 00:35:10.893 rmmod nvme_keyring 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.893 13:42:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2434568 ']' 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2434568 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2434568 ']' 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2434568 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434568 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434568' 00:35:10.893 killing process with pid 2434568 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2434568 00:35:10.893 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2434568 00:35:11.152 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:11.153 13:42:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.061 13:42:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.061 00:35:13.061 real 0m25.369s 00:35:13.061 user 0m40.402s 00:35:13.061 sys 0m9.584s 00:35:13.061 13:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.061 13:42:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:13.061 ************************************ 00:35:13.061 END TEST nvmf_interrupt 00:35:13.061 ************************************ 00:35:13.322 00:35:13.322 real 30m3.818s 00:35:13.322 user 61m29.771s 00:35:13.322 sys 10m13.591s 00:35:13.322 13:42:59 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.322 13:42:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.322 ************************************ 00:35:13.322 END TEST nvmf_tcp 00:35:13.322 ************************************ 00:35:13.322 13:42:59 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:13.322 13:42:59 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:13.322 13:42:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:13.322 13:42:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.322 13:42:59 -- common/autotest_common.sh@10 -- # set +x 00:35:13.322 ************************************ 
00:35:13.322 START TEST spdkcli_nvmf_tcp 00:35:13.322 ************************************ 00:35:13.322 13:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:13.322 * Looking for test storage... 00:35:13.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:13.322 13:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:13.322 13:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:35:13.322 13:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:13.583 13:42:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:13.583 13:42:59 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:13.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.583 --rc genhtml_branch_coverage=1 00:35:13.583 --rc genhtml_function_coverage=1 00:35:13.583 --rc genhtml_legend=1 00:35:13.583 --rc geninfo_all_blocks=1 00:35:13.583 --rc geninfo_unexecuted_blocks=1 00:35:13.583 00:35:13.583 ' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:13.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.583 --rc genhtml_branch_coverage=1 00:35:13.583 --rc genhtml_function_coverage=1 00:35:13.583 --rc genhtml_legend=1 00:35:13.583 --rc geninfo_all_blocks=1 
00:35:13.583 --rc geninfo_unexecuted_blocks=1 00:35:13.583 00:35:13.583 ' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:13.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.583 --rc genhtml_branch_coverage=1 00:35:13.583 --rc genhtml_function_coverage=1 00:35:13.583 --rc genhtml_legend=1 00:35:13.583 --rc geninfo_all_blocks=1 00:35:13.583 --rc geninfo_unexecuted_blocks=1 00:35:13.583 00:35:13.583 ' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:13.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.583 --rc genhtml_branch_coverage=1 00:35:13.583 --rc genhtml_function_coverage=1 00:35:13.583 --rc genhtml_legend=1 00:35:13.583 --rc geninfo_all_blocks=1 00:35:13.583 --rc geninfo_unexecuted_blocks=1 00:35:13.583 00:35:13.583 ' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
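The `lt 1.15 2` / cmp_versions trace above (scripts/common.sh@333-368) splits both version strings into components with `IFS=.-:` and compares them numerically, left to right, padding the shorter string with zeros. A minimal sketch of the "<" case under those assumptions (`version_lt` is a name chosen here, not the script's own):

```shell
# Minimal sketch of the cmp_versions "<" comparison traced above: compare
# two dotted version strings component by component, treating missing
# components as 0, so that 1.15 < 2 but 2 is not < 1.15.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal versions are not strictly less
}
```

This is how the harness decides that the installed `lcov` 1.15 predates 2.x and picks the older `--rc lcov_*` option spelling seen in the LCOV_OPTS export above.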
00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.583 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2438053 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2438053 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2438053 ']' 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:13.584 
13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.584 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:13.584 [2024-12-06 13:43:00.120783] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:35:13.584 [2024-12-06 13:43:00.120856] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438053 ] 00:35:13.584 [2024-12-06 13:43:00.214104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:13.844 [2024-12-06 13:43:00.269470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.844 [2024-12-06 13:43:00.269479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
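The `waitforlisten 2438053` call above, like the waitforserial / waitforserial_disconnect helpers traced earlier in this log, follows the same bounded-retry shape: run a probe, sleep, and give up after a fixed number of attempts. A generic sketch of that pattern (`wait_for` is an assumed name; the real helpers inline their probes, e.g. `lsblk -l -o NAME,SERIAL | grep -c "$serial"` in waitforserial):

```shell
# Generic bounded-retry loop in the style of waitforserial/waitforlisten:
# run the probe command up to $retries times, sleeping briefly between
# failed attempts, and fail only if it never succeeds.
wait_for() {
    local retries=$1
    shift
    local i=0
    while (( i++ < retries )); do
        if "$@"; then
            return 0   # probe succeeded
        fi
        sleep 1
    done
    return 1           # probe never succeeded within the retry budget
}
```

Returning nonzero on exhaustion is what lets the caller's `trap ... EXIT` cleanup fire when a target never comes up, instead of hanging the test forever.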
00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:14.415 13:43:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:14.415 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:14.415 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:14.415 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:14.415 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:14.415 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:14.415 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:14.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:14.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:14.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:14.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:14.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:14.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:14.416 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:14.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:14.416 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:14.416 ' 00:35:17.717 [2024-12-06 13:43:03.681544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:18.659 [2024-12-06 13:43:05.037639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:21.203 [2024-12-06 13:43:07.564664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:35:23.747 [2024-12-06 13:43:09.787004] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:25.133 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:25.133 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:25.133 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:25.133 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:25.133 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:25.133 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:25.133 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:25.133 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:25.133 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:25.133 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:25.133 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:25.133 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:25.133 13:43:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:25.394 13:43:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:25.655 13:43:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:25.655 13:43:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:25.655 13:43:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.655 13:43:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 13:43:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:25.656 13:43:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.656 13:43:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.656 13:43:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:25.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:25.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:25.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:25.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:25.656 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:25.656 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:25.656 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:25.656 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:25.656 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:25.656 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:25.656 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:25.656 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:25.656 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:25.656 ' 00:35:32.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:32.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:32.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:32.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:32.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:32.243 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:32.243 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:32.243 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:32.243 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:32.243 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:32.243 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:32.243 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:32.243 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:32.243 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2438053 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2438053 ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2438053 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2438053 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2438053' 00:35:32.243 killing process with pid 2438053 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2438053 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2438053 00:35:32.243 13:43:17 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2438053 ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2438053 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2438053 ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2438053 00:35:32.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2438053) - No such process 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2438053 is not found' 00:35:32.243 Process with pid 2438053 is not found 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:32.243 00:35:32.243 real 0m18.168s 00:35:32.243 user 0m40.340s 00:35:32.243 sys 0m0.883s 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.243 13:43:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:32.243 ************************************ 00:35:32.243 END TEST spdkcli_nvmf_tcp 00:35:32.243 ************************************ 00:35:32.243 13:43:18 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:32.243 13:43:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:32.243 13:43:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:35:32.243 13:43:18 -- common/autotest_common.sh@10 -- # set +x 00:35:32.243 ************************************ 00:35:32.243 START TEST nvmf_identify_passthru 00:35:32.243 ************************************ 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:32.243 * Looking for test storage... 00:35:32.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.243 13:43:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.243 --rc genhtml_branch_coverage=1 00:35:32.243 --rc genhtml_function_coverage=1 00:35:32.243 --rc genhtml_legend=1 00:35:32.243 --rc geninfo_all_blocks=1 00:35:32.243 --rc geninfo_unexecuted_blocks=1 00:35:32.243 
00:35:32.243 ' 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.243 --rc genhtml_branch_coverage=1 00:35:32.243 --rc genhtml_function_coverage=1 00:35:32.243 --rc genhtml_legend=1 00:35:32.243 --rc geninfo_all_blocks=1 00:35:32.243 --rc geninfo_unexecuted_blocks=1 00:35:32.243 00:35:32.243 ' 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.243 --rc genhtml_branch_coverage=1 00:35:32.243 --rc genhtml_function_coverage=1 00:35:32.243 --rc genhtml_legend=1 00:35:32.243 --rc geninfo_all_blocks=1 00:35:32.243 --rc geninfo_unexecuted_blocks=1 00:35:32.243 00:35:32.243 ' 00:35:32.243 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:32.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.243 --rc genhtml_branch_coverage=1 00:35:32.243 --rc genhtml_function_coverage=1 00:35:32.243 --rc genhtml_legend=1 00:35:32.243 --rc geninfo_all_blocks=1 00:35:32.243 --rc geninfo_unexecuted_blocks=1 00:35:32.243 00:35:32.243 ' 00:35:32.243 13:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.243 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:32.243 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.243 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.244 13:43:18 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:32.244 13:43:18 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:32.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.244 13:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.244 13:43:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:32.244 13:43:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.244 13:43:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.244 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.244 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.244 13:43:18 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.244 13:43:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.827 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:38.828 
13:43:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:38.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:38.828 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:38.828 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.828 13:43:25 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:38.828 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:38.828 
13:43:25 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.828 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:39.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:39.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:35:39.089 00:35:39.089 --- 10.0.0.2 ping statistics --- 00:35:39.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.089 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:39.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:35:39.089 00:35:39.089 --- 10.0.0.1 ping statistics --- 00:35:39.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.089 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:39.089 13:43:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:39.089 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:39.089 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.089 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:39.350 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:39.350 
13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:39.350 13:43:25 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:39.350 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:39.350 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:39.350 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:39.350 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:39.350 13:43:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:39.921 13:43:26 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:35:39.921 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:39.921 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:39.921 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2445434 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:40.491 13:43:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2445434 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2445434 ']' 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.491 13:43:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:40.491 [2024-12-06 13:43:26.971329] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:35:40.491 [2024-12-06 13:43:26.971395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.491 [2024-12-06 13:43:27.072413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:40.491 [2024-12-06 13:43:27.127993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.491 [2024-12-06 13:43:27.128046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.491 [2024-12-06 13:43:27.128055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.491 [2024-12-06 13:43:27.128062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.491 [2024-12-06 13:43:27.128069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:40.491 [2024-12-06 13:43:27.130123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.491 [2024-12-06 13:43:27.130282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.491 [2024-12-06 13:43:27.130444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.491 [2024-12-06 13:43:27.130443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:41.432 13:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.432 INFO: Log level set to 20 00:35:41.432 INFO: Requests: 00:35:41.432 { 00:35:41.432 "jsonrpc": "2.0", 00:35:41.432 "method": "nvmf_set_config", 00:35:41.432 "id": 1, 00:35:41.432 "params": { 00:35:41.432 "admin_cmd_passthru": { 00:35:41.432 "identify_ctrlr": true 00:35:41.432 } 00:35:41.432 } 00:35:41.432 } 00:35:41.432 00:35:41.432 INFO: response: 00:35:41.432 { 00:35:41.432 "jsonrpc": "2.0", 00:35:41.432 "id": 1, 00:35:41.432 "result": true 00:35:41.432 } 00:35:41.432 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.432 13:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.432 INFO: Setting log level to 20 00:35:41.432 INFO: Setting log level to 20 00:35:41.432 INFO: Log level set to 20 00:35:41.432 INFO: Log level set to 20 00:35:41.432 
INFO: Requests: 00:35:41.432 { 00:35:41.432 "jsonrpc": "2.0", 00:35:41.432 "method": "framework_start_init", 00:35:41.432 "id": 1 00:35:41.432 } 00:35:41.432 00:35:41.432 INFO: Requests: 00:35:41.432 { 00:35:41.432 "jsonrpc": "2.0", 00:35:41.432 "method": "framework_start_init", 00:35:41.432 "id": 1 00:35:41.432 } 00:35:41.432 00:35:41.432 [2024-12-06 13:43:27.898308] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:41.432 INFO: response: 00:35:41.432 { 00:35:41.432 "jsonrpc": "2.0", 00:35:41.432 "id": 1, 00:35:41.432 "result": true 00:35:41.432 } 00:35:41.432 00:35:41.432 INFO: response: 00:35:41.432 { 00:35:41.432 "jsonrpc": "2.0", 00:35:41.432 "id": 1, 00:35:41.432 "result": true 00:35:41.432 } 00:35:41.432 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.432 13:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.432 INFO: Setting log level to 40 00:35:41.432 INFO: Setting log level to 40 00:35:41.432 INFO: Setting log level to 40 00:35:41.432 [2024-12-06 13:43:27.911875] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.432 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.432 13:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:41.433 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:41.433 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.433 13:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:41.433 13:43:27 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.433 13:43:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.692 Nvme0n1 00:35:41.692 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.693 [2024-12-06 13:43:28.314418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.693 13:43:28 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:41.693 [ 00:35:41.693 { 00:35:41.693 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:41.693 "subtype": "Discovery", 00:35:41.693 "listen_addresses": [], 00:35:41.693 "allow_any_host": true, 00:35:41.693 "hosts": [] 00:35:41.693 }, 00:35:41.693 { 00:35:41.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:41.693 "subtype": "NVMe", 00:35:41.693 "listen_addresses": [ 00:35:41.693 { 00:35:41.693 "trtype": "TCP", 00:35:41.693 "adrfam": "IPv4", 00:35:41.693 "traddr": "10.0.0.2", 00:35:41.693 "trsvcid": "4420" 00:35:41.693 } 00:35:41.693 ], 00:35:41.693 "allow_any_host": true, 00:35:41.693 "hosts": [], 00:35:41.693 "serial_number": "SPDK00000000000001", 00:35:41.693 "model_number": "SPDK bdev Controller", 00:35:41.693 "max_namespaces": 1, 00:35:41.693 "min_cntlid": 1, 00:35:41.693 "max_cntlid": 65519, 00:35:41.693 "namespaces": [ 00:35:41.693 { 00:35:41.693 "nsid": 1, 00:35:41.693 "bdev_name": "Nvme0n1", 00:35:41.693 "name": "Nvme0n1", 00:35:41.693 "nguid": "36344730526054870025384500000044", 00:35:41.693 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:41.693 } 00:35:41.693 ] 00:35:41.693 } 00:35:41.693 ] 00:35:41.693 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:41.693 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:41.953 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:41.953 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:41.953 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:41.953 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:42.214 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:42.214 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.214 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:42.214 13:43:28 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:42.214 rmmod nvme_tcp 00:35:42.214 rmmod nvme_fabrics 00:35:42.214 rmmod nvme_keyring 00:35:42.214 13:43:28 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2445434 ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2445434 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2445434 ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2445434 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2445434 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2445434' 00:35:42.214 killing process with pid 2445434 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2445434 00:35:42.214 13:43:28 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2445434 00:35:42.474 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:42.474 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:42.474 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:42.474 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:42.734 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:42.734 13:43:29 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:42.734 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:42.734 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:42.734 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:42.734 13:43:29 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.734 13:43:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:42.734 13:43:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.647 13:43:31 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:44.647 00:35:44.647 real 0m13.161s 00:35:44.647 user 0m10.204s 00:35:44.647 sys 0m6.734s 00:35:44.647 13:43:31 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:44.647 13:43:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:44.647 ************************************ 00:35:44.647 END TEST nvmf_identify_passthru 00:35:44.647 ************************************ 00:35:44.647 13:43:31 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:44.647 13:43:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:44.647 13:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:44.647 13:43:31 -- common/autotest_common.sh@10 -- # set +x 00:35:44.909 ************************************ 00:35:44.909 START TEST nvmf_dif 00:35:44.909 ************************************ 00:35:44.909 13:43:31 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:44.909 * Looking for test storage... 
00:35:44.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:44.909 13:43:31 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:44.909 13:43:31 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:44.909 13:43:31 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:44.909 13:43:31 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:44.909 13:43:31 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.910 --rc genhtml_branch_coverage=1 00:35:44.910 --rc genhtml_function_coverage=1 00:35:44.910 --rc genhtml_legend=1 00:35:44.910 --rc geninfo_all_blocks=1 00:35:44.910 --rc geninfo_unexecuted_blocks=1 00:35:44.910 00:35:44.910 ' 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.910 --rc genhtml_branch_coverage=1 00:35:44.910 --rc genhtml_function_coverage=1 00:35:44.910 --rc genhtml_legend=1 00:35:44.910 --rc geninfo_all_blocks=1 00:35:44.910 --rc geninfo_unexecuted_blocks=1 00:35:44.910 00:35:44.910 ' 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:35:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.910 --rc genhtml_branch_coverage=1 00:35:44.910 --rc genhtml_function_coverage=1 00:35:44.910 --rc genhtml_legend=1 00:35:44.910 --rc geninfo_all_blocks=1 00:35:44.910 --rc geninfo_unexecuted_blocks=1 00:35:44.910 00:35:44.910 ' 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:44.910 --rc genhtml_branch_coverage=1 00:35:44.910 --rc genhtml_function_coverage=1 00:35:44.910 --rc genhtml_legend=1 00:35:44.910 --rc geninfo_all_blocks=1 00:35:44.910 --rc geninfo_unexecuted_blocks=1 00:35:44.910 00:35:44.910 ' 00:35:44.910 13:43:31 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:44.910 13:43:31 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:44.910 13:43:31 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:44.910 13:43:31 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.910 13:43:31 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.910 13:43:31 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.910 13:43:31 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:44.910 13:43:31 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:44.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:44.910 13:43:31 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:44.910 13:43:31 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:44.910 13:43:31 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:44.910 13:43:31 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:44.910 13:43:31 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:44.910 13:43:31 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:44.910 13:43:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:53.052 13:43:38 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:53.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:53.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.052 13:43:38 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:53.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:53.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.052 
13:43:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.052 13:43:38 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:53.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:53.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:35:53.053 00:35:53.053 --- 10.0.0.2 ping statistics --- 00:35:53.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.053 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:53.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:53.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:35:53.053 00:35:53.053 --- 10.0.0.1 ping statistics --- 00:35:53.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.053 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:53.053 13:43:38 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:55.600 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:55.600 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:55.600 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:55.600 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:55.600 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:55.600 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:55.600 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:55.861 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:35:55.861 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:55.861 13:43:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:55.861 13:43:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2451523 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2451523 00:35:55.861 13:43:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2451523 ']' 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:55.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.861 13:43:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.861 [2024-12-06 13:43:42.507981] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:35:55.861 [2024-12-06 13:43:42.508046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.121 [2024-12-06 13:43:42.604906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.121 [2024-12-06 13:43:42.641861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:56.121 [2024-12-06 13:43:42.641898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.121 [2024-12-06 13:43:42.641906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.121 [2024-12-06 13:43:42.641913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.121 [2024-12-06 13:43:42.641918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:56.121 [2024-12-06 13:43:42.642469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:56.690 13:43:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.690 13:43:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.690 13:43:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:56.690 13:43:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.690 [2024-12-06 13:43:43.340067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.690 13:43:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.690 13:43:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.951 ************************************ 00:35:56.951 START TEST fio_dif_1_default 00:35:56.951 ************************************ 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:56.951 bdev_null0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:56.951 [2024-12-06 13:43:43.424422] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:56.951 { 00:35:56.951 "params": { 00:35:56.951 "name": "Nvme$subsystem", 00:35:56.951 "trtype": "$TEST_TRANSPORT", 00:35:56.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:56.951 "adrfam": "ipv4", 00:35:56.951 "trsvcid": "$NVMF_PORT", 00:35:56.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:56.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:56.951 "hdgst": ${hdgst:-false}, 00:35:56.951 "ddgst": ${ddgst:-false} 00:35:56.951 }, 00:35:56.951 "method": "bdev_nvme_attach_controller" 00:35:56.951 } 00:35:56.951 EOF 00:35:56.951 )") 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:56.951 "params": { 00:35:56.951 "name": "Nvme0", 00:35:56.951 "trtype": "tcp", 00:35:56.951 "traddr": "10.0.0.2", 00:35:56.951 "adrfam": "ipv4", 00:35:56.951 "trsvcid": "4420", 00:35:56.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:56.951 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:56.951 "hdgst": false, 00:35:56.951 "ddgst": false 00:35:56.951 }, 00:35:56.951 "method": "bdev_nvme_attach_controller" 00:35:56.951 }' 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:56.951 13:43:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.245 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:57.245 fio-3.35 
00:35:57.245 Starting 1 thread 00:36:09.468 00:36:09.468 filename0: (groupid=0, jobs=1): err= 0: pid=2452097: Fri Dec 6 13:43:54 2024 00:36:09.468 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:36:09.468 slat (nsec): min=5503, max=36347, avg=6376.85, stdev=1746.57 00:36:09.468 clat (usec): min=40897, max=41486, avg=40984.40, stdev=32.90 00:36:09.468 lat (usec): min=40902, max=41522, avg=40990.77, stdev=33.54 00:36:09.468 clat percentiles (usec): 00:36:09.468 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:09.468 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:09.468 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:09.468 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:09.468 | 99.99th=[41681] 00:36:09.468 bw ( KiB/s): min= 384, max= 416, per=99.44%, avg=388.80, stdev=11.72, samples=20 00:36:09.468 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:09.468 lat (msec) : 50=100.00% 00:36:09.468 cpu : usr=93.40%, sys=6.39%, ctx=13, majf=0, minf=236 00:36:09.468 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.468 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.468 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:09.468 00:36:09.468 Run status group 0 (all jobs): 00:36:09.468 READ: bw=390KiB/s (400kB/s), 390KiB/s-390KiB/s (400kB/s-400kB/s), io=3904KiB (3998kB), run=10005-10005msec 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.468 00:36:09.468 real 0m11.302s 00:36:09.468 user 0m26.417s 00:36:09.468 sys 0m1.003s 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:09.468 ************************************ 00:36:09.468 END TEST fio_dif_1_default 00:36:09.468 ************************************ 00:36:09.468 13:43:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:09.468 13:43:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:09.468 13:43:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:09.468 13:43:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:09.468 ************************************ 00:36:09.468 START TEST fio_dif_1_multi_subsystems 00:36:09.468 ************************************ 00:36:09.468 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 bdev_null0 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 [2024-12-06 13:43:54.806234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 bdev_null1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 13:43:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:09.469 { 00:36:09.469 "params": { 00:36:09.469 "name": "Nvme$subsystem", 00:36:09.469 "trtype": "$TEST_TRANSPORT", 00:36:09.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.469 "adrfam": "ipv4", 00:36:09.469 "trsvcid": "$NVMF_PORT", 00:36:09.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.469 "hdgst": ${hdgst:-false}, 00:36:09.469 "ddgst": ${ddgst:-false} 00:36:09.469 }, 00:36:09.469 "method": "bdev_nvme_attach_controller" 00:36:09.469 } 00:36:09.469 EOF 00:36:09.469 )") 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.469 13:43:54 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:09.469 { 00:36:09.469 "params": { 00:36:09.469 "name": "Nvme$subsystem", 00:36:09.469 "trtype": "$TEST_TRANSPORT", 00:36:09.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:09.469 "adrfam": "ipv4", 00:36:09.469 "trsvcid": "$NVMF_PORT", 00:36:09.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:09.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:09.469 "hdgst": ${hdgst:-false}, 00:36:09.469 "ddgst": ${ddgst:-false} 00:36:09.469 }, 00:36:09.469 "method": "bdev_nvme_attach_controller" 00:36:09.469 } 00:36:09.469 EOF 00:36:09.469 )") 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:09.469 "params": { 00:36:09.469 "name": "Nvme0", 00:36:09.469 "trtype": "tcp", 00:36:09.469 "traddr": "10.0.0.2", 00:36:09.469 "adrfam": "ipv4", 00:36:09.469 "trsvcid": "4420", 00:36:09.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.469 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.469 "hdgst": false, 00:36:09.469 "ddgst": false 00:36:09.469 }, 00:36:09.469 "method": "bdev_nvme_attach_controller" 00:36:09.469 },{ 00:36:09.469 "params": { 00:36:09.469 "name": "Nvme1", 00:36:09.469 "trtype": "tcp", 00:36:09.469 "traddr": "10.0.0.2", 00:36:09.469 "adrfam": "ipv4", 00:36:09.469 "trsvcid": "4420", 00:36:09.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:09.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:09.469 "hdgst": false, 00:36:09.469 "ddgst": false 00:36:09.469 }, 00:36:09.469 "method": "bdev_nvme_attach_controller" 00:36:09.469 }' 00:36:09.469 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:09.470 13:43:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:09.470 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:09.470 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:09.470 fio-3.35 00:36:09.470 Starting 2 threads 00:36:19.591 00:36:19.591 filename0: (groupid=0, jobs=1): err= 0: pid=2454343: Fri Dec 6 13:44:06 2024 00:36:19.591 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:36:19.591 slat (nsec): min=5505, max=45446, avg=6471.59, stdev=1892.31 00:36:19.591 clat (usec): min=40843, max=41683, avg=40984.94, stdev=51.39 00:36:19.591 lat (usec): min=40851, max=41711, avg=40991.41, stdev=51.76 00:36:19.591 clat percentiles (usec): 00:36:19.591 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:19.591 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:19.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:19.591 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:36:19.591 | 99.99th=[41681] 00:36:19.591 bw ( KiB/s): min= 384, max= 416, per=49.63%, avg=388.80, stdev=11.72, samples=20 00:36:19.591 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:19.591 lat (msec) : 50=100.00% 00:36:19.591 cpu : usr=95.11%, sys=4.68%, ctx=8, majf=0, minf=192 00:36:19.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.591 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:19.591 filename1: (groupid=0, jobs=1): err= 0: pid=2454344: Fri Dec 6 13:44:06 2024 00:36:19.591 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10007msec) 00:36:19.591 slat (nsec): min=5500, max=30894, avg=6490.06, stdev=1482.98 00:36:19.591 clat (usec): min=861, max=41965, avg=40826.61, stdev=2560.90 00:36:19.591 lat (usec): min=867, max=41974, avg=40833.10, stdev=2560.96 00:36:19.591 clat percentiles (usec): 00:36:19.591 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:19.591 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:19.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:19.591 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:19.591 | 99.99th=[42206] 00:36:19.591 bw ( KiB/s): min= 384, max= 416, per=49.88%, avg=390.40, stdev=13.13, samples=20 00:36:19.591 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:36:19.591 lat (usec) : 1000=0.41% 00:36:19.591 lat (msec) : 50=99.59% 00:36:19.591 cpu : usr=95.61%, sys=4.18%, ctx=16, majf=0, minf=82 00:36:19.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.591 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:19.591 00:36:19.591 Run status group 0 (all jobs): 00:36:19.591 READ: bw=782KiB/s (801kB/s), 390KiB/s-392KiB/s (400kB/s-401kB/s), io=7824KiB (8012kB), run=10005-10007msec 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:19.591 13:44:06 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.591 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.853 00:36:19.853 real 0m11.498s 00:36:19.853 user 0m32.187s 00:36:19.853 sys 0m1.273s 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 ************************************ 00:36:19.853 END TEST fio_dif_1_multi_subsystems 00:36:19.853 ************************************ 00:36:19.853 13:44:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:19.853 13:44:06 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:19.853 13:44:06 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 ************************************ 00:36:19.853 START TEST fio_dif_rand_params 00:36:19.853 ************************************ 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 bdev_null0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 13:44:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:19.853 [2024-12-06 13:44:06.388601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:19.853 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:19.853 { 00:36:19.853 "params": { 
00:36:19.854 "name": "Nvme$subsystem", 00:36:19.854 "trtype": "$TEST_TRANSPORT", 00:36:19.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:19.854 "adrfam": "ipv4", 00:36:19.854 "trsvcid": "$NVMF_PORT", 00:36:19.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:19.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:19.854 "hdgst": ${hdgst:-false}, 00:36:19.854 "ddgst": ${ddgst:-false} 00:36:19.854 }, 00:36:19.854 "method": "bdev_nvme_attach_controller" 00:36:19.854 } 00:36:19.854 EOF 00:36:19.854 )") 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:19.854 13:44:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:19.854 "params": { 00:36:19.854 "name": "Nvme0", 00:36:19.854 "trtype": "tcp", 00:36:19.854 "traddr": "10.0.0.2", 00:36:19.854 "adrfam": "ipv4", 00:36:19.854 "trsvcid": "4420", 00:36:19.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.854 "hdgst": false, 00:36:19.854 "ddgst": false 00:36:19.854 }, 00:36:19.854 "method": "bdev_nvme_attach_controller" 00:36:19.854 }' 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:19.854 13:44:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:20.424 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:20.424 ... 00:36:20.424 fio-3.35 00:36:20.424 Starting 3 threads 00:36:27.012 00:36:27.012 filename0: (groupid=0, jobs=1): err= 0: pid=2456633: Fri Dec 6 13:44:12 2024 00:36:27.012 read: IOPS=319, BW=39.9MiB/s (41.9MB/s)(200MiB/5007msec) 00:36:27.012 slat (nsec): min=8068, max=32106, avg=8929.55, stdev=1800.91 00:36:27.012 clat (usec): min=4684, max=49826, avg=9379.37, stdev=4318.55 00:36:27.012 lat (usec): min=4693, max=49835, avg=9388.30, stdev=4318.54 00:36:27.012 clat percentiles (usec): 00:36:27.012 | 1.00th=[ 5407], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:36:27.012 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:36:27.012 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:36:27.012 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49021], 99.95th=[50070], 00:36:27.012 | 99.99th=[50070] 00:36:27.012 bw ( KiB/s): min=30720, max=45056, per=34.66%, avg=40883.20, stdev=4120.90, samples=10 00:36:27.012 iops : min= 240, max= 352, avg=319.40, stdev=32.19, samples=10 00:36:27.012 lat (msec) : 10=87.93%, 20=10.94%, 50=1.13% 00:36:27.012 cpu : usr=93.93%, sys=5.81%, ctx=12, majf=0, minf=52 00:36:27.012 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.012 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.012 filename0: (groupid=0, jobs=1): err= 0: pid=2456634: Fri Dec 6 13:44:12 2024 00:36:27.012 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(184MiB/5044msec) 00:36:27.012 slat (nsec): min=5542, max=33154, avg=8605.80, stdev=1889.70 00:36:27.012 clat (usec): 
min=5529, max=89336, avg=10260.43, stdev=5426.62 00:36:27.012 lat (usec): min=5538, max=89345, avg=10269.04, stdev=5426.50 00:36:27.012 clat percentiles (usec): 00:36:27.012 | 1.00th=[ 6783], 5.00th=[ 7635], 10.00th=[ 8356], 20.00th=[ 8979], 00:36:27.012 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:36:27.012 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:36:27.012 | 99.00th=[48497], 99.50th=[49021], 99.90th=[50070], 99.95th=[89654], 00:36:27.012 | 99.99th=[89654] 00:36:27.012 bw ( KiB/s): min=28928, max=40192, per=31.84%, avg=37555.20, stdev=3578.00, samples=10 00:36:27.012 iops : min= 226, max= 314, avg=293.40, stdev=27.95, samples=10 00:36:27.012 lat (msec) : 10=66.98%, 20=31.31%, 50=1.50%, 100=0.20% 00:36:27.012 cpu : usr=94.15%, sys=5.61%, ctx=16, majf=0, minf=82 00:36:27.012 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.012 issued rwts: total=1469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.012 filename0: (groupid=0, jobs=1): err= 0: pid=2456636: Fri Dec 6 13:44:12 2024 00:36:27.012 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(198MiB/5044msec) 00:36:27.012 slat (nsec): min=5881, max=34065, avg=8824.03, stdev=1794.23 00:36:27.012 clat (usec): min=4642, max=51221, avg=9539.36, stdev=2431.91 00:36:27.012 lat (usec): min=4651, max=51230, avg=9548.18, stdev=2431.88 00:36:27.012 clat percentiles (usec): 00:36:27.012 | 1.00th=[ 6194], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8586], 00:36:27.012 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:36:27.012 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:36:27.012 | 99.00th=[11863], 99.50th=[12649], 99.90th=[47973], 99.95th=[51119], 00:36:27.012 | 
99.99th=[51119] 00:36:27.012 bw ( KiB/s): min=37556, max=44800, per=34.24%, avg=40389.20, stdev=2437.74, samples=10 00:36:27.012 iops : min= 293, max= 350, avg=315.50, stdev=19.10, samples=10 00:36:27.012 lat (msec) : 10=67.66%, 20=32.03%, 50=0.25%, 100=0.06% 00:36:27.012 cpu : usr=94.77%, sys=5.00%, ctx=8, majf=0, minf=150 00:36:27.012 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.012 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:27.012 00:36:27.012 Run status group 0 (all jobs): 00:36:27.012 READ: bw=115MiB/s (121MB/s), 36.4MiB/s-39.9MiB/s (38.2MB/s-41.9MB/s), io=581MiB (609MB), run=5007-5044msec 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 
-- # xtrace_disable 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:27.012 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 bdev_null0 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 [2024-12-06 13:44:12.649280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 bdev_null1 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:27.013 bdev_null2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:27.013 { 00:36:27.013 "params": { 00:36:27.013 "name": "Nvme$subsystem", 00:36:27.013 "trtype": "$TEST_TRANSPORT", 00:36:27.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.013 "adrfam": "ipv4", 00:36:27.013 "trsvcid": "$NVMF_PORT", 00:36:27.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.013 "hdgst": ${hdgst:-false}, 00:36:27.013 "ddgst": ${ddgst:-false} 00:36:27.013 }, 00:36:27.013 "method": "bdev_nvme_attach_controller" 00:36:27.013 } 00:36:27.013 EOF 00:36:27.013 )") 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:27.013 { 00:36:27.013 "params": { 00:36:27.013 "name": "Nvme$subsystem", 00:36:27.013 "trtype": "$TEST_TRANSPORT", 00:36:27.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.013 "adrfam": "ipv4", 00:36:27.013 "trsvcid": "$NVMF_PORT", 00:36:27.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.013 "hdgst": ${hdgst:-false}, 00:36:27.013 "ddgst": ${ddgst:-false} 00:36:27.013 }, 00:36:27.013 "method": "bdev_nvme_attach_controller" 00:36:27.013 } 00:36:27.013 EOF 00:36:27.013 )") 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:27.013 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:27.013 { 00:36:27.013 "params": { 00:36:27.013 "name": "Nvme$subsystem", 00:36:27.013 "trtype": "$TEST_TRANSPORT", 00:36:27.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.013 "adrfam": "ipv4", 00:36:27.013 "trsvcid": "$NVMF_PORT", 00:36:27.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.013 "hdgst": ${hdgst:-false}, 00:36:27.013 "ddgst": ${ddgst:-false} 00:36:27.013 }, 00:36:27.013 "method": "bdev_nvme_attach_controller" 00:36:27.013 } 00:36:27.013 EOF 00:36:27.013 )") 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:27.014 "params": { 00:36:27.014 "name": "Nvme0", 00:36:27.014 "trtype": "tcp", 00:36:27.014 "traddr": "10.0.0.2", 00:36:27.014 "adrfam": "ipv4", 00:36:27.014 "trsvcid": "4420", 00:36:27.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.014 "hdgst": false, 00:36:27.014 "ddgst": false 00:36:27.014 }, 00:36:27.014 "method": "bdev_nvme_attach_controller" 00:36:27.014 },{ 00:36:27.014 "params": { 00:36:27.014 "name": "Nvme1", 00:36:27.014 "trtype": "tcp", 00:36:27.014 "traddr": "10.0.0.2", 00:36:27.014 "adrfam": "ipv4", 00:36:27.014 "trsvcid": "4420", 00:36:27.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:27.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:27.014 "hdgst": false, 00:36:27.014 "ddgst": false 00:36:27.014 }, 00:36:27.014 "method": "bdev_nvme_attach_controller" 00:36:27.014 },{ 00:36:27.014 "params": { 00:36:27.014 "name": "Nvme2", 00:36:27.014 "trtype": "tcp", 00:36:27.014 "traddr": "10.0.0.2", 00:36:27.014 "adrfam": "ipv4", 00:36:27.014 "trsvcid": "4420", 00:36:27.014 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:27.014 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:27.014 "hdgst": false, 00:36:27.014 "ddgst": false 00:36:27.014 }, 00:36:27.014 "method": "bdev_nvme_attach_controller" 00:36:27.014 }' 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:27.014 13:44:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:27.014 13:44:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.014 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:27.014 ... 00:36:27.014 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:27.014 ... 00:36:27.014 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:27.014 ... 
00:36:27.014 fio-3.35 00:36:27.014 Starting 24 threads 00:36:39.259 00:36:39.259 filename0: (groupid=0, jobs=1): err= 0: pid=2458054: Fri Dec 6 13:44:24 2024 00:36:39.259 read: IOPS=808, BW=3236KiB/s (3313kB/s)(31.7MiB/10021msec) 00:36:39.259 slat (nsec): min=5673, max=71678, avg=7096.05, stdev=3276.35 00:36:39.259 clat (usec): min=991, max=36176, avg=19720.43, stdev=5172.78 00:36:39.259 lat (usec): min=1039, max=36193, avg=19727.53, stdev=5171.87 00:36:39.259 clat percentiles (usec): 00:36:39.259 | 1.00th=[ 1319], 5.00th=[12125], 10.00th=[15270], 20.00th=[16188], 00:36:39.259 | 30.00th=[16712], 40.00th=[17171], 50.00th=[19792], 60.00th=[23725], 00:36:39.259 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:36:39.259 | 99.00th=[26084], 99.50th=[29230], 99.90th=[31851], 99.95th=[33817], 00:36:39.259 | 99.99th=[36439] 00:36:39.259 bw ( KiB/s): min= 2560, max= 4608, per=5.05%, avg=3237.60, stdev=576.50, samples=20 00:36:39.259 iops : min= 640, max= 1152, avg=809.40, stdev=144.13, samples=20 00:36:39.259 lat (usec) : 1000=0.01% 00:36:39.259 lat (msec) : 2=2.33%, 4=0.39%, 10=0.84%, 20=46.60%, 50=49.83% 00:36:39.259 cpu : usr=98.49%, sys=1.08%, ctx=69, majf=0, minf=38 00:36:39.259 IO depths : 1=2.8%, 2=5.8%, 4=15.0%, 8=66.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:36:39.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 complete : 0=0.0%, 4=91.3%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 issued rwts: total=8106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.259 filename0: (groupid=0, jobs=1): err= 0: pid=2458055: Fri Dec 6 13:44:24 2024 00:36:39.259 read: IOPS=660, BW=2641KiB/s (2704kB/s)(25.8MiB/10009msec) 00:36:39.259 slat (usec): min=5, max=105, avg=13.53, stdev=10.24 00:36:39.259 clat (usec): min=8144, max=33126, avg=24120.23, stdev=2015.39 00:36:39.259 lat (usec): min=8190, max=33135, avg=24133.77, stdev=2013.44 00:36:39.259 
clat percentiles (usec): 00:36:39.259 | 1.00th=[14484], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:39.259 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:36:39.259 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822], 00:36:39.259 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26870], 99.95th=[31065], 00:36:39.259 | 99.99th=[33162] 00:36:39.259 bw ( KiB/s): min= 2554, max= 2944, per=4.13%, avg=2646.95, stdev=96.53, samples=19 00:36:39.259 iops : min= 638, max= 736, avg=661.68, stdev=24.19, samples=19 00:36:39.259 lat (msec) : 10=0.53%, 20=2.92%, 50=96.55% 00:36:39.259 cpu : usr=98.84%, sys=0.89%, ctx=12, majf=0, minf=23 00:36:39.259 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:39.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.259 filename0: (groupid=0, jobs=1): err= 0: pid=2458056: Fri Dec 6 13:44:24 2024 00:36:39.259 read: IOPS=652, BW=2609KiB/s (2672kB/s)(25.5MiB/10008msec) 00:36:39.259 slat (nsec): min=5688, max=75134, avg=14072.38, stdev=9558.81 00:36:39.259 clat (usec): min=13955, max=35530, avg=24412.13, stdev=1294.25 00:36:39.259 lat (usec): min=13963, max=35537, avg=24426.21, stdev=1293.94 00:36:39.259 clat percentiles (usec): 00:36:39.259 | 1.00th=[18482], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:39.259 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:36:39.259 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:36:39.259 | 99.00th=[26608], 99.50th=[30802], 99.90th=[33817], 99.95th=[34341], 00:36:39.259 | 99.99th=[35390] 00:36:39.259 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2606.47, stdev=63.35, samples=19 00:36:39.259 iops : min= 638, 
max= 672, avg=651.53, stdev=15.86, samples=19 00:36:39.259 lat (msec) : 20=1.16%, 50=98.84% 00:36:39.259 cpu : usr=98.51%, sys=1.08%, ctx=103, majf=0, minf=20 00:36:39.259 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:39.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.259 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.259 filename0: (groupid=0, jobs=1): err= 0: pid=2458057: Fri Dec 6 13:44:24 2024 00:36:39.259 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10005msec) 00:36:39.259 slat (nsec): min=5607, max=55373, avg=10961.00, stdev=7669.40 00:36:39.259 clat (usec): min=5701, max=43721, avg=23715.90, stdev=3378.32 00:36:39.259 lat (usec): min=5707, max=43736, avg=23726.86, stdev=3379.38 00:36:39.259 clat percentiles (usec): 00:36:39.259 | 1.00th=[13829], 5.00th=[16450], 10.00th=[19006], 20.00th=[23462], 00:36:39.259 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:36:39.259 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25822], 95.00th=[26608], 00:36:39.259 | 99.00th=[33162], 99.50th=[36439], 99.90th=[43779], 99.95th=[43779], 00:36:39.259 | 99.99th=[43779] 00:36:39.259 bw ( KiB/s): min= 2484, max= 2944, per=4.20%, avg=2691.26, stdev=113.07, samples=19 00:36:39.259 iops : min= 621, max= 736, avg=672.79, stdev=28.30, samples=19 00:36:39.259 lat (msec) : 10=0.31%, 20=11.01%, 50=88.68% 00:36:39.259 cpu : usr=98.98%, sys=0.75%, ctx=14, majf=0, minf=38 00:36:39.259 IO depths : 1=2.0%, 2=4.8%, 4=12.2%, 8=68.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:36:39.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 complete : 0=0.0%, 4=91.3%, 8=5.2%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.259 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.259 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:36:39.259 filename0: (groupid=0, jobs=1): err= 0: pid=2458058: Fri Dec 6 13:44:24 2024 00:36:39.259 read: IOPS=717, BW=2869KiB/s (2938kB/s)(28.1MiB/10021msec) 00:36:39.259 slat (usec): min=5, max=128, avg=10.02, stdev= 7.97 00:36:39.259 clat (usec): min=9700, max=44579, avg=22235.08, stdev=5021.01 00:36:39.259 lat (usec): min=9718, max=44585, avg=22245.10, stdev=5022.25 00:36:39.259 clat percentiles (usec): 00:36:39.259 | 1.00th=[12518], 5.00th=[15270], 10.00th=[15926], 20.00th=[17171], 00:36:39.260 | 30.00th=[18744], 40.00th=[21103], 50.00th=[23725], 60.00th=[23987], 00:36:39.260 | 70.00th=[24249], 80.00th=[25035], 90.00th=[26608], 95.00th=[31589], 00:36:39.260 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[44303], 00:36:39.260 | 99.99th=[44827] 00:36:39.260 bw ( KiB/s): min= 2536, max= 3168, per=4.48%, avg=2869.35, stdev=159.65, samples=20 00:36:39.260 iops : min= 634, max= 792, avg=717.30, stdev=39.89, samples=20 00:36:39.260 lat (msec) : 10=0.10%, 20=34.40%, 50=65.51% 00:36:39.260 cpu : usr=98.41%, sys=1.16%, ctx=81, majf=0, minf=35 00:36:39.260 IO depths : 1=1.2%, 2=2.5%, 4=10.4%, 8=73.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 issued rwts: total=7187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.260 filename0: (groupid=0, jobs=1): err= 0: pid=2458059: Fri Dec 6 13:44:24 2024 00:36:39.260 read: IOPS=655, BW=2622KiB/s (2685kB/s)(25.6MiB/10007msec) 00:36:39.260 slat (nsec): min=4812, max=90038, avg=20959.92, stdev=13103.66 00:36:39.260 clat (usec): min=7343, max=42214, avg=24217.03, stdev=2344.58 00:36:39.260 lat (usec): min=7349, max=42229, avg=24237.99, stdev=2345.16 00:36:39.260 clat percentiles (usec): 00:36:39.260 | 1.00th=[15401], 5.00th=[22938], 
10.00th=[23462], 20.00th=[23725], 00:36:39.260 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:36:39.260 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:36:39.260 | 99.00th=[31851], 99.50th=[33817], 99.90th=[42206], 99.95th=[42206], 00:36:39.260 | 99.99th=[42206] 00:36:39.260 bw ( KiB/s): min= 2432, max= 2688, per=4.05%, avg=2595.58, stdev=72.67, samples=19 00:36:39.260 iops : min= 608, max= 672, avg=648.84, stdev=18.20, samples=19 00:36:39.260 lat (msec) : 10=0.06%, 20=3.87%, 50=96.07% 00:36:39.260 cpu : usr=98.94%, sys=0.78%, ctx=14, majf=0, minf=25 00:36:39.260 IO depths : 1=5.2%, 2=10.6%, 4=22.7%, 8=54.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.260 filename0: (groupid=0, jobs=1): err= 0: pid=2458060: Fri Dec 6 13:44:24 2024 00:36:39.260 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10009msec) 00:36:39.260 slat (nsec): min=5689, max=75542, avg=15947.98, stdev=11327.34 00:36:39.260 clat (usec): min=6593, max=37237, avg=24204.11, stdev=1833.06 00:36:39.260 lat (usec): min=6607, max=37245, avg=24220.06, stdev=1831.85 00:36:39.260 clat percentiles (usec): 00:36:39.260 | 1.00th=[14615], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:39.260 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:36:39.260 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:36:39.260 | 99.00th=[26346], 99.50th=[26346], 99.90th=[31589], 99.95th=[36963], 00:36:39.260 | 99.99th=[37487] 00:36:39.260 bw ( KiB/s): min= 2427, max= 3088, per=4.11%, avg=2634.37, stdev=133.85, samples=19 00:36:39.260 iops : min= 606, max= 772, avg=658.53, stdev=33.54, samples=19 00:36:39.260 lat (msec) : 
10=0.47%, 20=1.72%, 50=97.81% 00:36:39.260 cpu : usr=98.92%, sys=0.80%, ctx=14, majf=0, minf=32 00:36:39.260 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 issued rwts: total=6578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.260 filename0: (groupid=0, jobs=1): err= 0: pid=2458061: Fri Dec 6 13:44:24 2024 00:36:39.260 read: IOPS=651, BW=2605KiB/s (2668kB/s)(25.5MiB/10007msec) 00:36:39.260 slat (nsec): min=5684, max=76664, avg=19665.11, stdev=12212.31 00:36:39.260 clat (usec): min=10291, max=38868, avg=24386.08, stdev=1511.51 00:36:39.260 lat (usec): min=10297, max=38927, avg=24405.75, stdev=1511.37 00:36:39.260 clat percentiles (usec): 00:36:39.260 | 1.00th=[17957], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:39.260 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:36:39.260 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:36:39.260 | 99.00th=[30016], 99.50th=[32375], 99.90th=[38536], 99.95th=[39060], 00:36:39.260 | 99.99th=[39060] 00:36:39.260 bw ( KiB/s): min= 2554, max= 2688, per=4.06%, avg=2601.95, stdev=61.17, samples=19 00:36:39.260 iops : min= 638, max= 672, avg=650.37, stdev=15.39, samples=19 00:36:39.260 lat (msec) : 20=1.49%, 50=98.51% 00:36:39.260 cpu : usr=98.26%, sys=1.12%, ctx=158, majf=0, minf=25 00:36:39.260 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.260 issued rwts: total=6518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:39.260 
filename1: (groupid=0, jobs=1): err= 0: pid=2458062: Fri Dec 6 13:44:24 2024
00:36:39.260 read: IOPS=650, BW=2601KiB/s (2663kB/s)(25.4MiB/10004msec)
00:36:39.260 slat (nsec): min=4787, max=88226, avg=22114.14, stdev=14715.12
00:36:39.260 clat (usec): min=7335, max=44442, avg=24399.48, stdev=2449.07
00:36:39.260 lat (usec): min=7341, max=44457, avg=24421.59, stdev=2448.81
00:36:39.260 clat percentiles (usec):
00:36:39.260 | 1.00th=[16319], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725],
00:36:39.260 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.260 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084],
00:36:39.260 | 99.00th=[33424], 99.50th=[34866], 99.90th=[44303], 99.95th=[44303],
00:36:39.260 | 99.99th=[44303]
00:36:39.260 bw ( KiB/s): min= 2432, max= 2880, per=4.05%, avg=2596.74, stdev=111.62, samples=19
00:36:39.260 iops : min= 608, max= 720, avg=649.16, stdev=27.92, samples=19
00:36:39.260 lat (msec) : 10=0.11%, 20=2.91%, 50=96.99%
00:36:39.260 cpu : usr=98.99%, sys=0.72%, ctx=20, majf=0, minf=24
00:36:39.260 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0%
00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.260 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.260 issued rwts: total=6504,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.260 filename1: (groupid=0, jobs=1): err= 0: pid=2458063: Fri Dec 6 13:44:24 2024
00:36:39.260 read: IOPS=677, BW=2711KiB/s (2776kB/s)(26.5MiB/10009msec)
00:36:39.260 slat (usec): min=5, max=144, avg=10.40, stdev= 7.99
00:36:39.260 clat (usec): min=8124, max=31308, avg=23517.84, stdev=2796.53
00:36:39.260 lat (usec): min=8187, max=31333, avg=23528.24, stdev=2795.77
00:36:39.260 clat percentiles (usec):
00:36:39.260 | 1.00th=[14222], 5.00th=[16188], 10.00th=[18482], 20.00th=[23725],
00:36:39.260 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.260 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560],
00:36:39.260 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346],
00:36:39.260 | 99.99th=[31327]
00:36:39.260 bw ( KiB/s): min= 2432, max= 2944, per=4.21%, avg=2694.11, stdev=123.91, samples=19
00:36:39.260 iops : min= 608, max= 736, avg=673.47, stdev=30.95, samples=19
00:36:39.260 lat (msec) : 10=0.53%, 20=10.58%, 50=88.89%
00:36:39.260 cpu : usr=98.76%, sys=0.79%, ctx=115, majf=0, minf=49
00:36:39.260 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.260 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.260 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.260 filename1: (groupid=0, jobs=1): err= 0: pid=2458064: Fri Dec 6 13:44:24 2024
00:36:39.260 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10015msec)
00:36:39.260 slat (nsec): min=5665, max=86732, avg=16397.34, stdev=13567.79
00:36:39.260 clat (usec): min=11521, max=41539, avg=23624.15, stdev=4330.46
00:36:39.260 lat (usec): min=11527, max=41553, avg=23640.54, stdev=4331.75
00:36:39.260 clat percentiles (usec):
00:36:39.260 | 1.00th=[14353], 5.00th=[15926], 10.00th=[17171], 20.00th=[20579],
00:36:39.260 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249],
00:36:39.260 | 70.00th=[24511], 80.00th=[25297], 90.00th=[27657], 95.00th=[32375],
00:36:39.260 | 99.00th=[37487], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109],
00:36:39.260 | 99.99th=[41681]
00:36:39.260 bw ( KiB/s): min= 2480, max= 2848, per=4.19%, avg=2682.42, stdev=108.51, samples=19
00:36:39.260 iops : min= 620, max= 712, avg=670.53, stdev=27.12, samples=19
00:36:39.260 lat (msec) : 20=18.15%, 50=81.85%
00:36:39.260 cpu : usr=98.82%, sys=0.91%, ctx=13, majf=0, minf=25
00:36:39.260 IO depths : 1=2.3%, 2=4.6%, 4=12.0%, 8=69.6%, 16=11.5%, 32=0.0%, >=64=0.0%
00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.260 complete : 0=0.0%, 4=90.8%, 8=4.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.260 issued rwts: total=6745,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.260 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.260 filename1: (groupid=0, jobs=1): err= 0: pid=2458065: Fri Dec 6 13:44:24 2024
00:36:39.260 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.7MiB/10045msec)
00:36:39.260 slat (nsec): min=5669, max=65055, avg=13060.65, stdev=9624.76
00:36:39.260 clat (usec): min=11272, max=51219, avg=24294.95, stdev=2902.61
00:36:39.260 lat (usec): min=11295, max=51226, avg=24308.01, stdev=2902.78
00:36:39.260 clat percentiles (usec):
00:36:39.260 | 1.00th=[16450], 5.00th=[19006], 10.00th=[21365], 20.00th=[23725],
00:36:39.260 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:39.260 | 70.00th=[24773], 80.00th=[25297], 90.00th=[25822], 95.00th=[27657],
00:36:39.260 | 99.00th=[33817], 99.50th=[38011], 99.90th=[46400], 99.95th=[46400],
00:36:39.260 | 99.99th=[51119]
00:36:39.260 bw ( KiB/s): min= 2452, max= 2736, per=4.10%, avg=2624.15, stdev=64.12, samples=20
00:36:39.260 iops : min= 613, max= 684, avg=656.00, stdev=16.06, samples=20
00:36:39.260 lat (msec) : 20=7.30%, 50=92.65%, 100=0.05%
00:36:39.260 cpu : usr=98.69%, sys=1.03%, ctx=61, majf=0, minf=25
00:36:39.260 IO depths : 1=0.4%, 2=0.8%, 4=2.7%, 8=79.2%, 16=16.9%, 32=0.0%, >=64=0.0%
00:36:39.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=89.5%, 8=9.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6588,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename1: (groupid=0, jobs=1): err= 0: pid=2458066: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10014msec)
00:36:39.261 slat (nsec): min=5730, max=79070, avg=11862.45, stdev=7391.78
00:36:39.261 clat (usec): min=7928, max=27662, avg=24244.97, stdev=1838.08
00:36:39.261 lat (usec): min=7988, max=27670, avg=24256.83, stdev=1837.02
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[11600], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725],
00:36:39.261 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:39.261 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822],
00:36:39.261 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26870], 99.95th=[27657],
00:36:39.261 | 99.99th=[27657]
00:36:39.261 bw ( KiB/s): min= 2432, max= 2997, per=4.10%, avg=2626.05, stdev=114.99, samples=20
00:36:39.261 iops : min= 608, max= 749, avg=656.45, stdev=28.74, samples=20
00:36:39.261 lat (msec) : 10=0.73%, 20=1.06%, 50=98.21%
00:36:39.261 cpu : usr=98.51%, sys=1.10%, ctx=67, majf=0, minf=32
00:36:39.261 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename1: (groupid=0, jobs=1): err= 0: pid=2458067: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=703, BW=2813KiB/s (2880kB/s)(27.5MiB/10015msec)
00:36:39.261 slat (usec): min=5, max=102, avg=15.72, stdev=13.41
00:36:39.261 clat (usec): min=9540, max=41554, avg=22633.68, stdev=4611.10
00:36:39.261 lat (usec): min=9558, max=41571, avg=22649.40, stdev=4613.62
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[14091], 5.00th=[15664], 10.00th=[16188], 20.00th=[17695],
00:36:39.261 | 30.00th=[20579], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987],
00:36:39.261 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25822], 95.00th=[31065],
00:36:39.261 | 99.00th=[38536], 99.50th=[39060], 99.90th=[41157], 99.95th=[41681],
00:36:39.261 | 99.99th=[41681]
00:36:39.261 bw ( KiB/s): min= 2560, max= 3056, per=4.39%, avg=2812.15, stdev=145.62, samples=20
00:36:39.261 iops : min= 640, max= 764, avg=703.00, stdev=36.41, samples=20
00:36:39.261 lat (msec) : 10=0.10%, 20=27.38%, 50=72.52%
00:36:39.261 cpu : usr=98.61%, sys=1.07%, ctx=61, majf=0, minf=28
00:36:39.261 IO depths : 1=2.5%, 2=5.0%, 4=13.6%, 8=68.5%, 16=10.5%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=90.9%, 8=3.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=7042,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename1: (groupid=0, jobs=1): err= 0: pid=2458068: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=662, BW=2651KiB/s (2714kB/s)(25.9MiB/10004msec)
00:36:39.261 slat (nsec): min=5669, max=92348, avg=16248.32, stdev=13154.59
00:36:39.261 clat (usec): min=5640, max=46189, avg=24042.26, stdev=4496.06
00:36:39.261 lat (usec): min=5646, max=46200, avg=24058.51, stdev=4496.77
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[14484], 5.00th=[16450], 10.00th=[17695], 20.00th=[21103],
00:36:39.261 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.261 | 70.00th=[25035], 80.00th=[25560], 90.00th=[29230], 95.00th=[32375],
00:36:39.261 | 99.00th=[38536], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254],
00:36:39.261 | 99.99th=[46400]
00:36:39.261 bw ( KiB/s): min= 2432, max= 2928, per=4.13%, avg=2644.53, stdev=148.76, samples=19
00:36:39.261 iops : min= 608, max= 732, avg=661.11, stdev=37.21, samples=19
00:36:39.261 lat (msec) : 10=0.38%, 20=15.49%, 50=84.13%
00:36:39.261 cpu : usr=98.84%, sys=0.86%, ctx=37, majf=0, minf=38
00:36:39.261 IO depths : 1=1.4%, 2=3.0%, 4=9.3%, 8=73.3%, 16=13.0%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=89.9%, 8=6.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6629,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename1: (groupid=0, jobs=1): err= 0: pid=2458069: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=649, BW=2599KiB/s (2661kB/s)(25.5MiB/10048msec)
00:36:39.261 slat (nsec): min=5718, max=80842, avg=19152.63, stdev=13415.82
00:36:39.261 clat (usec): min=12157, max=52951, avg=24390.01, stdev=1335.97
00:36:39.261 lat (usec): min=12163, max=52957, avg=24409.16, stdev=1334.45
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725],
00:36:39.261 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.261 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822],
00:36:39.261 | 99.00th=[26346], 99.50th=[26608], 99.90th=[31065], 99.95th=[52691],
00:36:39.261 | 99.99th=[52691]
00:36:39.261 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2605.89, stdev=63.13, samples=19
00:36:39.261 iops : min= 638, max= 672, avg=651.37, stdev=15.76, samples=19
00:36:39.261 lat (msec) : 20=0.46%, 50=99.45%, 100=0.09%
00:36:39.261 cpu : usr=98.51%, sys=1.05%, ctx=55, majf=0, minf=30
00:36:39.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename2: (groupid=0, jobs=1): err= 0: pid=2458070: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=650, BW=2602KiB/s (2665kB/s)(25.4MiB/10007msec)
00:36:39.261 slat (nsec): min=5677, max=80795, avg=17885.77, stdev=12427.95
00:36:39.261 clat (usec): min=11123, max=39088, avg=24413.11, stdev=1680.91
00:36:39.261 lat (usec): min=11129, max=39102, avg=24431.00, stdev=1679.78
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[18744], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725],
00:36:39.261 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.261 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822],
00:36:39.261 | 99.00th=[32113], 99.50th=[34341], 99.90th=[38536], 99.95th=[39060],
00:36:39.261 | 99.99th=[39060]
00:36:39.261 bw ( KiB/s): min= 2480, max= 2688, per=4.06%, avg=2600.26, stdev=68.23, samples=19
00:36:39.261 iops : min= 620, max= 672, avg=649.95, stdev=17.09, samples=19
00:36:39.261 lat (msec) : 20=1.51%, 50=98.49%
00:36:39.261 cpu : usr=98.71%, sys=0.97%, ctx=82, majf=0, minf=21
00:36:39.261 IO depths : 1=5.9%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6510,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename2: (groupid=0, jobs=1): err= 0: pid=2458071: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.3MiB/10005msec)
00:36:39.261 slat (nsec): min=4732, max=85661, avg=17732.28, stdev=12796.41
00:36:39.261 clat (usec): min=6878, max=51617, avg=23601.82, stdev=3833.07
00:36:39.261 lat (usec): min=6907, max=51630, avg=23619.56, stdev=3834.96
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[14746], 5.00th=[16319], 10.00th=[17695], 20.00th=[22676],
00:36:39.261 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249],
00:36:39.261 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[28443],
00:36:39.261 | 99.00th=[38011], 99.50th=[39060], 99.90th=[44827], 99.95th=[46924],
00:36:39.261 | 99.99th=[51643]
00:36:39.261 bw ( KiB/s): min= 2432, max= 2864, per=4.19%, avg=2685.16, stdev=116.77, samples=19
00:36:39.261 iops : min= 608, max= 716, avg=671.26, stdev=29.18, samples=19
00:36:39.261 lat (msec) : 10=0.24%, 20=13.65%, 50=86.09%, 100=0.03%
00:36:39.261 cpu : usr=98.44%, sys=1.22%, ctx=136, majf=0, minf=23
00:36:39.261 IO depths : 1=3.1%, 2=7.3%, 4=18.5%, 8=61.3%, 16=9.9%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=92.4%, 8=2.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename2: (groupid=0, jobs=1): err= 0: pid=2458072: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.6MiB/10009msec)
00:36:39.261 slat (usec): min=5, max=161, avg=15.68, stdev=11.00
00:36:39.261 clat (usec): min=9579, max=28106, avg=24340.22, stdev=1268.36
00:36:39.261 lat (usec): min=9596, max=28138, avg=24355.90, stdev=1267.16
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725],
00:36:39.261 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:39.261 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822],
00:36:39.261 | 99.00th=[26084], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870],
00:36:39.261 | 99.99th=[28181]
00:36:39.261 bw ( KiB/s): min= 2554, max= 2816, per=4.09%, avg=2620.32, stdev=78.58, samples=19
00:36:39.261 iops : min= 638, max= 704, avg=655.05, stdev=19.67, samples=19
00:36:39.261 lat (msec) : 10=0.24%, 20=0.49%, 50=99.27%
00:36:39.261 cpu : usr=98.78%, sys=0.86%, ctx=68, majf=0, minf=28
00:36:39.261 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:39.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.261 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.261 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.261 filename2: (groupid=0, jobs=1): err= 0: pid=2458073: Fri Dec 6 13:44:24 2024
00:36:39.261 read: IOPS=660, BW=2642KiB/s (2706kB/s)(25.8MiB/10004msec)
00:36:39.261 slat (nsec): min=5683, max=91230, avg=19788.53, stdev=13068.71
00:36:39.261 clat (usec): min=3340, max=50338, avg=24054.62, stdev=3067.41
00:36:39.261 lat (usec): min=3347, max=50354, avg=24074.41, stdev=3068.34
00:36:39.261 clat percentiles (usec):
00:36:39.261 | 1.00th=[15008], 5.00th=[17695], 10.00th=[23200], 20.00th=[23462],
00:36:39.261 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.261 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26346],
00:36:39.261 | 99.00th=[34866], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254],
00:36:39.262 | 99.99th=[50594]
00:36:39.262 bw ( KiB/s): min= 2436, max= 2906, per=4.12%, avg=2638.21, stdev=106.31, samples=19
00:36:39.262 iops : min= 609, max= 726, avg=659.53, stdev=26.51, samples=19
00:36:39.262 lat (msec) : 4=0.09%, 10=0.30%, 20=6.52%, 50=93.04%, 100=0.05%
00:36:39.262 cpu : usr=98.43%, sys=1.12%, ctx=164, majf=0, minf=28
00:36:39.262 IO depths : 1=3.7%, 2=8.6%, 4=20.8%, 8=57.8%, 16=9.1%, 32=0.0%, >=64=0.0%
00:36:39.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.262 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.262 filename2: (groupid=0, jobs=1): err= 0: pid=2458074: Fri Dec 6 13:44:24 2024
00:36:39.262 read: IOPS=644, BW=2578KiB/s (2640kB/s)(25.2MiB/10004msec)
00:36:39.262 slat (nsec): min=5573, max=83770, avg=15962.84, stdev=12382.81
00:36:39.262 clat (usec): min=3547, max=61388, avg=24760.26, stdev=3710.25
00:36:39.262 lat (usec): min=3553, max=61407, avg=24776.22, stdev=3709.98
00:36:39.262 clat percentiles (usec):
00:36:39.262 | 1.00th=[15795], 5.00th=[19530], 10.00th=[21890], 20.00th=[23725],
00:36:39.262 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773],
00:36:39.262 | 70.00th=[25035], 80.00th=[25560], 90.00th=[27132], 95.00th=[31327],
00:36:39.262 | 99.00th=[36963], 99.50th=[40633], 99.90th=[61080], 99.95th=[61604],
00:36:39.262 | 99.99th=[61604]
00:36:39.262 bw ( KiB/s): min= 2356, max= 2704, per=4.01%, avg=2568.32, stdev=100.62, samples=19
00:36:39.262 iops : min= 589, max= 676, avg=642.05, stdev=25.15, samples=19
00:36:39.262 lat (msec) : 4=0.02%, 10=0.31%, 20=5.75%, 50=93.67%, 100=0.25%
00:36:39.262 cpu : usr=99.02%, sys=0.70%, ctx=21, majf=0, minf=48
00:36:39.262 IO depths : 1=0.1%, 2=0.2%, 4=2.7%, 8=80.3%, 16=16.7%, 32=0.0%, >=64=0.0%
00:36:39.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 complete : 0=0.0%, 4=89.4%, 8=9.0%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 issued rwts: total=6447,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.262 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.262 filename2: (groupid=0, jobs=1): err= 0: pid=2458075: Fri Dec 6 13:44:24 2024
00:36:39.262 read: IOPS=673, BW=2695KiB/s (2760kB/s)(26.4MiB/10015msec)
00:36:39.262 slat (usec): min=5, max=130, avg=14.85, stdev=11.06
00:36:39.262 clat (usec): min=8194, max=41142, avg=23621.16, stdev=3555.28
00:36:39.262 lat (usec): min=8210, max=41166, avg=23636.01, stdev=3555.96
00:36:39.262 clat percentiles (usec):
00:36:39.262 | 1.00th=[13566], 5.00th=[16319], 10.00th=[17957], 20.00th=[23462],
00:36:39.262 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511],
00:36:39.262 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26346],
00:36:39.262 | 99.00th=[34341], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157],
00:36:39.262 | 99.99th=[41157]
00:36:39.262 bw ( KiB/s): min= 2554, max= 3120, per=4.21%, avg=2696.45, stdev=151.31, samples=20
00:36:39.262 iops : min= 638, max= 780, avg=674.05, stdev=37.86, samples=20
00:36:39.262 lat (msec) : 10=0.50%, 20=12.80%, 50=86.69%
00:36:39.262 cpu : usr=98.79%, sys=0.92%, ctx=11, majf=0, minf=38
00:36:39.262 IO depths : 1=2.1%, 2=6.3%, 4=20.3%, 8=60.8%, 16=10.5%, 32=0.0%, >=64=0.0%
00:36:39.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 complete : 0=0.0%, 4=93.1%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 issued rwts: total=6748,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.262 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.262 filename2: (groupid=0, jobs=1): err= 0: pid=2458076: Fri Dec 6 13:44:24 2024
00:36:39.262 read: IOPS=653, BW=2614KiB/s (2677kB/s)(25.6MiB/10013msec)
00:36:39.262 slat (nsec): min=5698, max=79157, avg=15230.55, stdev=10635.74
00:36:39.262 clat (usec): min=11003, max=30149, avg=24353.30, stdev=1133.81
00:36:39.262 lat (usec): min=11012, max=30155, avg=24368.53, stdev=1132.95
00:36:39.262 clat percentiles (usec):
00:36:39.262 | 1.00th=[20317], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725],
00:36:39.262 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:39.262 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822],
00:36:39.262 | 99.00th=[26608], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870],
00:36:39.262 | 99.99th=[30278]
00:36:39.262 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2610.00, stdev=64.76, samples=20
00:36:39.262 iops : min= 638, max= 672, avg=652.40, stdev=16.23, samples=20
00:36:39.262 lat (msec) : 20=0.98%, 50=99.02%
00:36:39.262 cpu : usr=98.76%, sys=0.79%, ctx=99, majf=0, minf=29
00:36:39.262 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:39.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.262 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.262 filename2: (groupid=0, jobs=1): err= 0: pid=2458077: Fri Dec 6 13:44:24 2024
00:36:39.262 read: IOPS=652, BW=2609KiB/s (2672kB/s)(25.5MiB/10007msec)
00:36:39.262 slat (nsec): min=5739, max=70520, avg=15888.79, stdev=10530.23
00:36:39.262 clat (usec): min=13351, max=30016, avg=24385.65, stdev=977.20
00:36:39.262 lat (usec): min=13357, max=30050, avg=24401.54, stdev=976.16
00:36:39.262 clat percentiles (usec):
00:36:39.262 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725],
00:36:39.262 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511],
00:36:39.262 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822],
00:36:39.262 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608],
00:36:39.262 | 99.99th=[30016]
00:36:39.262 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2605.89, stdev=63.80, samples=19
00:36:39.262 iops : min= 638, max= 672, avg=651.37, stdev=15.99, samples=19
00:36:39.262 lat (msec) : 20=0.52%, 50=99.48%
00:36:39.262 cpu : usr=98.85%, sys=0.84%, ctx=62, majf=0, minf=28
00:36:39.262 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:39.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:39.262 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:39.262 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:39.262
00:36:39.262 Run status group 0 (all jobs):
00:36:39.262 READ: bw=62.5MiB/s (65.6MB/s), 2578KiB/s-3236KiB/s (2640kB/s-3313kB/s), io=628MiB (659MB), run=10004-10048msec
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.262 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.262 bdev_null0
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 [2024-12-06 13:44:24.495063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 bdev_null1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:39.263 {
00:36:39.263 "params": {
00:36:39.263 "name": "Nvme$subsystem",
00:36:39.263 "trtype": "$TEST_TRANSPORT",
00:36:39.263 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:39.263 "adrfam": "ipv4",
00:36:39.263 "trsvcid": "$NVMF_PORT",
00:36:39.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:39.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:39.263 "hdgst": ${hdgst:-false},
00:36:39.263 "ddgst": ${ddgst:-false}
00:36:39.263 },
00:36:39.263 "method": "bdev_nvme_attach_controller"
00:36:39.263 }
00:36:39.263 EOF
00:36:39.263 )")
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:39.263 {
00:36:39.263 "params": {
00:36:39.263 "name": "Nvme$subsystem",
00:36:39.263 "trtype": "$TEST_TRANSPORT",
00:36:39.263 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:39.263 "adrfam": "ipv4",
00:36:39.263 "trsvcid": "$NVMF_PORT",
00:36:39.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:39.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:39.263 "hdgst": ${hdgst:-false},
00:36:39.263 "ddgst": ${ddgst:-false}
00:36:39.263 },
00:36:39.263 "method": "bdev_nvme_attach_controller"
00:36:39.263 }
00:36:39.263 EOF
00:36:39.263 )")
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:39.263 "params": { 00:36:39.263 "name": "Nvme0", 00:36:39.263 "trtype": "tcp", 00:36:39.263 "traddr": "10.0.0.2", 00:36:39.263 "adrfam": "ipv4", 00:36:39.263 "trsvcid": "4420", 00:36:39.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:39.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:39.263 "hdgst": false, 00:36:39.263 "ddgst": false 00:36:39.263 }, 00:36:39.263 "method": "bdev_nvme_attach_controller" 00:36:39.263 },{ 00:36:39.263 "params": { 00:36:39.263 "name": "Nvme1", 00:36:39.263 "trtype": "tcp", 00:36:39.263 "traddr": "10.0.0.2", 00:36:39.263 "adrfam": "ipv4", 00:36:39.263 "trsvcid": "4420", 00:36:39.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:39.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:39.263 "hdgst": false, 00:36:39.263 "ddgst": false 00:36:39.263 }, 00:36:39.263 "method": "bdev_nvme_attach_controller" 00:36:39.263 }' 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:39.263 13:44:24 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:39.263 13:44:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:39.263 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:39.263 ... 00:36:39.263 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:39.263 ... 00:36:39.263 fio-3.35 00:36:39.263 Starting 4 threads 00:36:44.553 00:36:44.553 filename0: (groupid=0, jobs=1): err= 0: pid=2460540: Fri Dec 6 13:44:30 2024 00:36:44.553 read: IOPS=2970, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec) 00:36:44.553 slat (nsec): min=5507, max=55481, avg=8036.96, stdev=3494.50 00:36:44.553 clat (usec): min=1050, max=5828, avg=2671.14, stdev=265.71 00:36:44.553 lat (usec): min=1058, max=5850, avg=2679.18, stdev=265.63 00:36:44.553 clat percentiles (usec): 00:36:44.553 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2540], 00:36:44.553 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:44.553 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2999], 00:36:44.553 | 99.00th=[ 3425], 99.50th=[ 3621], 99.90th=[ 4293], 99.95th=[ 5604], 00:36:44.553 | 99.99th=[ 5604] 00:36:44.553 bw ( KiB/s): min=22752, max=24768, per=25.43%, avg=23769.50, stdev=616.73, samples=10 00:36:44.553 iops : min= 2844, max= 3096, avg=2971.10, stdev=77.12, samples=10 00:36:44.553 lat (msec) : 2=1.75%, 4=98.08%, 10=0.17% 00:36:44.553 cpu : usr=95.98%, sys=3.72%, ctx=6, majf=0, minf=50 00:36:44.553 IO depths : 1=0.1%, 2=0.4%, 4=71.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.553 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.553 issued rwts: total=14858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.553 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:44.553 filename0: (groupid=0, jobs=1): err= 0: pid=2460541: Fri Dec 6 13:44:30 2024 00:36:44.553 read: IOPS=2915, BW=22.8MiB/s (23.9MB/s)(114MiB/5001msec) 00:36:44.553 slat (nsec): min=5508, max=45501, avg=8095.02, stdev=3493.30 00:36:44.553 clat (usec): min=643, max=45780, avg=2722.49, stdev=1076.72 00:36:44.553 lat (usec): min=650, max=45802, avg=2730.59, stdev=1076.79 00:36:44.553 clat percentiles (usec): 00:36:44.553 | 1.00th=[ 1942], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2474], 00:36:44.553 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:44.553 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 3032], 95.00th=[ 3458], 00:36:44.553 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4555], 99.95th=[45876], 00:36:44.553 | 99.99th=[45876] 00:36:44.553 bw ( KiB/s): min=21328, max=25168, per=24.68%, avg=23064.89, stdev=1036.64, samples=9 00:36:44.553 iops : min= 2666, max= 3146, avg=2883.11, stdev=129.58, samples=9 00:36:44.554 lat (usec) : 750=0.02% 00:36:44.554 lat (msec) : 2=1.29%, 4=96.98%, 10=1.65%, 50=0.05% 00:36:44.554 cpu : usr=95.70%, sys=4.00%, ctx=7, majf=0, minf=129 00:36:44.554 IO depths : 1=0.1%, 2=1.4%, 4=69.1%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.554 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.554 issued rwts: total=14581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:44.554 filename1: (groupid=0, jobs=1): err= 0: pid=2460542: Fri Dec 6 13:44:30 2024 00:36:44.554 read: IOPS=2897, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:36:44.554 slat (nsec): min=5508, max=78679, avg=9101.67, stdev=3876.99 00:36:44.554 clat (usec): min=911, max=6200, avg=2737.84, 
stdev=259.44 00:36:44.554 lat (usec): min=928, max=6223, avg=2746.95, stdev=259.45 00:36:44.554 clat percentiles (usec): 00:36:44.554 | 1.00th=[ 2040], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2671], 00:36:44.554 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:36:44.554 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2999], 95.00th=[ 3130], 00:36:44.554 | 99.00th=[ 3654], 99.50th=[ 3884], 99.90th=[ 4686], 99.95th=[ 4752], 00:36:44.554 | 99.99th=[ 5407] 00:36:44.554 bw ( KiB/s): min=22704, max=23776, per=24.87%, avg=23239.11, stdev=381.75, samples=9 00:36:44.554 iops : min= 2838, max= 2972, avg=2904.89, stdev=47.72, samples=9 00:36:44.554 lat (usec) : 1000=0.01% 00:36:44.554 lat (msec) : 2=0.59%, 4=98.99%, 10=0.41% 00:36:44.554 cpu : usr=96.32%, sys=3.38%, ctx=11, majf=0, minf=73 00:36:44.554 IO depths : 1=0.1%, 2=0.2%, 4=70.6%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.554 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.554 issued rwts: total=14489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:44.554 filename1: (groupid=0, jobs=1): err= 0: pid=2460544: Fri Dec 6 13:44:30 2024 00:36:44.554 read: IOPS=2900, BW=22.7MiB/s (23.8MB/s)(113MiB/5001msec) 00:36:44.554 slat (usec): min=8, max=153, avg= 9.55, stdev= 3.98 00:36:44.554 clat (usec): min=1429, max=5307, avg=2732.85, stdev=231.92 00:36:44.554 lat (usec): min=1437, max=5319, avg=2742.40, stdev=231.81 00:36:44.554 clat percentiles (usec): 00:36:44.554 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:36:44.554 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:36:44.554 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3064], 00:36:44.554 | 99.00th=[ 3589], 99.50th=[ 3949], 99.90th=[ 4621], 99.95th=[ 5080], 00:36:44.554 | 99.99th=[ 5276] 00:36:44.554 
bw ( KiB/s): min=22672, max=23680, per=24.83%, avg=23207.60, stdev=361.15, samples=10 00:36:44.554 iops : min= 2834, max= 2960, avg=2900.80, stdev=45.07, samples=10 00:36:44.554 lat (msec) : 2=0.32%, 4=99.26%, 10=0.42% 00:36:44.554 cpu : usr=96.42%, sys=3.28%, ctx=10, majf=0, minf=60 00:36:44.554 IO depths : 1=0.1%, 2=0.1%, 4=71.2%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:44.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.554 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.554 issued rwts: total=14507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:44.554 00:36:44.554 Run status group 0 (all jobs): 00:36:44.554 READ: bw=91.3MiB/s (95.7MB/s), 22.6MiB/s-23.2MiB/s (23.7MB/s-24.3MB/s), io=457MiB (479MB), run=5001-5002msec 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 
13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 00:36:44.554 real 0m24.609s 00:36:44.554 user 5m19.910s 00:36:44.554 sys 0m4.977s 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.554 13:44:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 ************************************ 00:36:44.554 END TEST fio_dif_rand_params 00:36:44.554 ************************************ 00:36:44.554 13:44:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:44.554 13:44:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.554 13:44:30 nvmf_dif -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:44.554 13:44:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 ************************************ 00:36:44.554 START TEST fio_dif_digest 00:36:44.554 ************************************ 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 bdev_null0 
00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:44.554 [2024-12-06 13:44:31.078189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.554 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:44.554 { 00:36:44.554 "params": { 00:36:44.554 "name": "Nvme$subsystem", 00:36:44.554 "trtype": "$TEST_TRANSPORT", 00:36:44.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.554 "adrfam": "ipv4", 00:36:44.554 "trsvcid": "$NVMF_PORT", 00:36:44.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.555 "hdgst": ${hdgst:-false}, 00:36:44.555 "ddgst": ${ddgst:-false} 00:36:44.555 }, 00:36:44.555 "method": "bdev_nvme_attach_controller" 00:36:44.555 } 00:36:44.555 EOF 00:36:44.555 )") 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:44.555 
13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:44.555 "params": { 00:36:44.555 "name": "Nvme0", 00:36:44.555 "trtype": "tcp", 00:36:44.555 "traddr": "10.0.0.2", 00:36:44.555 "adrfam": "ipv4", 00:36:44.555 "trsvcid": "4420", 00:36:44.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.555 "hdgst": true, 00:36:44.555 "ddgst": true 00:36:44.555 }, 00:36:44.555 "method": "bdev_nvme_attach_controller" 00:36:44.555 }' 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.555 13:44:31 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:44.555 13:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:45.128 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:45.128 ... 00:36:45.129 fio-3.35 00:36:45.129 Starting 3 threads 00:36:57.360 00:36:57.360 filename0: (groupid=0, jobs=1): err= 0: pid=2461768: Fri Dec 6 13:44:42 2024 00:36:57.360 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(354MiB/10004msec) 00:36:57.360 slat (nsec): min=5896, max=33216, avg=7780.12, stdev=1696.36 00:36:57.360 clat (usec): min=7885, max=52760, avg=10583.81, stdev=1576.65 00:36:57.360 lat (usec): min=7892, max=52767, avg=10591.59, stdev=1576.66 00:36:57.360 clat percentiles (usec): 00:36:57.360 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:36:57.360 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:36:57.360 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:36:57.360 | 99.00th=[12518], 99.50th=[12649], 99.90th=[51643], 99.95th=[52691], 00:36:57.360 | 99.99th=[52691] 00:36:57.360 bw ( KiB/s): min=35072, max=37376, per=31.86%, avg=36217.26, stdev=561.96, samples=19 00:36:57.360 iops : min= 274, max= 292, avg=282.95, stdev= 4.39, samples=19 00:36:57.360 lat (msec) : 10=24.71%, 20=75.19%, 100=0.11% 00:36:57.360 cpu : usr=95.63%, sys=4.13%, ctx=17, 
majf=0, minf=96 00:36:57.360 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.360 issued rwts: total=2833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.360 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:57.360 filename0: (groupid=0, jobs=1): err= 0: pid=2461769: Fri Dec 6 13:44:42 2024 00:36:57.360 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(344MiB/10045msec) 00:36:57.360 slat (nsec): min=5896, max=32139, avg=7344.80, stdev=1598.42 00:36:57.360 clat (usec): min=6918, max=50322, avg=10917.34, stdev=1316.58 00:36:57.360 lat (usec): min=6926, max=50330, avg=10924.69, stdev=1316.65 00:36:57.360 clat percentiles (usec): 00:36:57.360 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:36:57.360 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:36:57.360 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:36:57.360 | 99.00th=[13042], 99.50th=[13304], 99.90th=[15795], 99.95th=[45351], 00:36:57.360 | 99.99th=[50070] 00:36:57.360 bw ( KiB/s): min=34304, max=36096, per=30.99%, avg=35225.60, stdev=472.77, samples=20 00:36:57.360 iops : min= 268, max= 282, avg=275.20, stdev= 3.69, samples=20 00:36:57.360 lat (msec) : 10=13.44%, 20=86.49%, 50=0.04%, 100=0.04% 00:36:57.360 cpu : usr=95.69%, sys=4.07%, ctx=21, majf=0, minf=195 00:36:57.360 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.360 issued rwts: total=2754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.360 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:57.360 filename0: (groupid=0, jobs=1): err= 0: pid=2461770: Fri Dec 6 13:44:42 
2024 00:36:57.360 read: IOPS=331, BW=41.5MiB/s (43.5MB/s)(417MiB/10046msec) 00:36:57.360 slat (nsec): min=6098, max=35275, avg=8527.13, stdev=1617.66 00:36:57.360 clat (usec): min=5744, max=49477, avg=9013.47, stdev=1193.56 00:36:57.360 lat (usec): min=5752, max=49485, avg=9022.00, stdev=1193.53 00:36:57.360 clat percentiles (usec): 00:36:57.360 | 1.00th=[ 7308], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:36:57.360 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241], 00:36:57.360 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:36:57.360 | 99.00th=[10683], 99.50th=[10945], 99.90th=[11469], 99.95th=[45876], 00:36:57.360 | 99.99th=[49546] 00:36:57.360 bw ( KiB/s): min=41728, max=43520, per=37.53%, avg=42662.40, stdev=571.08, samples=20 00:36:57.360 iops : min= 326, max= 340, avg=333.30, stdev= 4.46, samples=20 00:36:57.360 lat (msec) : 10=92.41%, 20=7.53%, 50=0.06% 00:36:57.360 cpu : usr=93.97%, sys=5.77%, ctx=23, majf=0, minf=196 00:36:57.360 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:57.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.360 issued rwts: total=3335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.360 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:57.360 00:36:57.360 Run status group 0 (all jobs): 00:36:57.360 READ: bw=111MiB/s (116MB/s), 34.3MiB/s-41.5MiB/s (35.9MB/s-43.5MB/s), io=1115MiB (1169MB), run=10004-10046msec 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # 
local sub_id=0 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.360 00:36:57.360 real 0m11.276s 00:36:57.360 user 0m40.370s 00:36:57.360 sys 0m1.749s 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.360 13:44:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:57.360 ************************************ 00:36:57.360 END TEST fio_dif_digest 00:36:57.360 ************************************ 00:36:57.360 13:44:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:57.360 13:44:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.360 rmmod nvme_tcp 00:36:57.360 rmmod nvme_fabrics 00:36:57.360 rmmod nvme_keyring 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2451523 ']' 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2451523 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2451523 ']' 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2451523 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2451523 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2451523' 00:36:57.360 killing process with pid 2451523 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2451523 00:36:57.360 13:44:42 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2451523 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:57.360 13:44:42 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:59.276 Waiting for block devices as requested 00:36:59.536 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:59.536 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:59.536 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:59.797 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:59.797 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:59.797 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:00.058 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:00.058 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:00.058 
0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:00.320 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:00.320 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:00.320 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:00.581 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:00.581 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:00.581 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:00.843 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:00.843 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:00.843 13:44:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.843 13:44:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:00.843 13:44:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.404 13:44:49 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:03.404 00:37:03.404 real 1m18.137s 00:37:03.404 user 8m1.309s 00:37:03.404 sys 0m22.239s 00:37:03.404 13:44:49 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.404 13:44:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.404 ************************************ 00:37:03.404 END TEST nvmf_dif 00:37:03.404 ************************************ 00:37:03.404 13:44:49 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:03.404 13:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:03.404 13:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.404 13:44:49 -- common/autotest_common.sh@10 -- # set +x 00:37:03.404 ************************************ 00:37:03.404 START TEST nvmf_abort_qd_sizes 00:37:03.404 ************************************ 00:37:03.404 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:03.404 * Looking for test storage... 00:37:03.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:03.404 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:03.404 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:03.404 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:03.404 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
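The `iptr` teardown step traced above works because every firewall rule SPDK adds carries an `SPDK_NVMF:` comment: the ruleset is round-tripped through `iptables-save`, lines with that tag are dropped with `grep -v`, and the remainder goes back through `iptables-restore`. The filtering itself is plain text processing and can be sketched without touching a live firewall; the rule text below is a made-up example, not output from this run:

```shell
# Sketch: strip SPDK-tagged rules from an iptables-save dump.  In the real
# helper this filter sits between `iptables-save` and `iptables-restore`;
# here it runs on a canned dump so no root is needed.
# Note: grep -v exits 1 if every line is filtered out, so a caller under
# `set -e` may want `|| true`.
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

# Example dump with one untagged and one tagged rule.
dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1"'
printf '%s\n' "$dump" | strip_spdk_rules
```

Only the tagged rule disappears, so any firewall state that predated the test survives the cleanup.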
00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.405 --rc genhtml_branch_coverage=1 00:37:03.405 --rc genhtml_function_coverage=1 00:37:03.405 --rc 
genhtml_legend=1 00:37:03.405 --rc geninfo_all_blocks=1 00:37:03.405 --rc geninfo_unexecuted_blocks=1 00:37:03.405 00:37:03.405 ' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.405 --rc genhtml_branch_coverage=1 00:37:03.405 --rc genhtml_function_coverage=1 00:37:03.405 --rc genhtml_legend=1 00:37:03.405 --rc geninfo_all_blocks=1 00:37:03.405 --rc geninfo_unexecuted_blocks=1 00:37:03.405 00:37:03.405 ' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.405 --rc genhtml_branch_coverage=1 00:37:03.405 --rc genhtml_function_coverage=1 00:37:03.405 --rc genhtml_legend=1 00:37:03.405 --rc geninfo_all_blocks=1 00:37:03.405 --rc geninfo_unexecuted_blocks=1 00:37:03.405 00:37:03.405 ' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:03.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.405 --rc genhtml_branch_coverage=1 00:37:03.405 --rc genhtml_function_coverage=1 00:37:03.405 --rc genhtml_legend=1 00:37:03.405 --rc geninfo_all_blocks=1 00:37:03.405 --rc geninfo_unexecuted_blocks=1 00:37:03.405 00:37:03.405 ' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:03.405 13:44:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:03.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:03.406 13:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:11.568 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:11.568 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.568 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:11.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:11.569 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.569 13:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:11.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:11.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:37:11.569 00:37:11.569 --- 10.0.0.2 ping statistics --- 00:37:11.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.569 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:11.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:37:11.569 00:37:11.569 --- 10.0.0.1 ping statistics --- 00:37:11.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.569 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:11.569 13:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:14.117 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:14.117 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:37:14.117 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:14.378 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2471252 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2471252 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2471252 ']' 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:14.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.378 13:45:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:14.378 [2024-12-06 13:45:00.993044] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:37:14.378 [2024-12-06 13:45:00.993110] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.639 [2024-12-06 13:45:01.096286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:14.639 [2024-12-06 13:45:01.152609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.639 [2024-12-06 13:45:01.152666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.639 [2024-12-06 13:45:01.152675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.639 [2024-12-06 13:45:01.152683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.639 [2024-12-06 13:45:01.152689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
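`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 2471252, running inside the `cvl_0_0_ns_spdk` namespace) is reachable on `/var/tmp/spdk.sock`. A generic poll-with-timeout of that shape can be sketched as below; this is a simplification under the assumption that waiting for the socket path to appear is enough, whereas the real helper also retries an RPC call against it:

```shell
# Sketch: wait until a path (e.g. an RPC unix socket) exists, with a bounded
# number of retries.  Returns 0 once the path shows up, 1 on timeout.
wait_for_path() {
    local path=$1 retries=${2:-100} delay=${3:-0.1}
    local i
    for ((i = 0; i < retries; i++)); do
        [ -e "$path" ] && return 0
        sleep "$delay"
    done
    return 1
}

# Example: create the file asynchronously, then wait for it to appear.
tmp=$(mktemp -u)
( sleep 0.2; : > "$tmp" ) &
wait_for_path "$tmp" 50 0.1 && echo up
wait
rm -f "$tmp"
```

The bounded retry count is what turns a hung target into a test failure instead of a stuck pipeline.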
00:37:14.639 [2024-12-06 13:45:01.155145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.639 [2024-12-06 13:45:01.155353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.639 [2024-12-06 13:45:01.155353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:14.639 [2024-12-06 13:45:01.155183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:15.210 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
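The `nvme_in_userspace` trace above boils down to: collect every PCI function whose class code is `0x010802` (mass storage, NVM Express, NVMe I/O controller), then keep the usable ones, which here yields just `0000:65:00.0`. The real script reads the codes from a prebuilt `pci_bus_cache`; as a hedged approximation, the same scan can be done directly against sysfs. The `base` parameter exists only so the sketch can be pointed at a fabricated tree instead of the live `/sys`:

```shell
# Sketch: enumerate NVMe controllers by PCI class code 0x010802.
# base defaults to the real sysfs tree; tests can pass a fake one.
list_nvme_bdfs() {
    local base=${1:-/sys/bus/pci/devices}
    local dev class
    for dev in "$base"/*; do
        [ -r "$dev/class" ] || continue
        read -r class < "$dev/class"
        # 01 = mass storage, 08 = NVM, 02 = NVMe I/O controller
        [ "$class" = 0x010802 ] && basename "$dev"
    done
    return 0
}
```

On this node the only match is the Samsung device at `0000:65:00.0`, which is why the test later picks it as `nvme=0000:65:00.0`.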
00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:15.472 13:45:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.472 ************************************ 00:37:15.472 START TEST spdk_target_abort 00:37:15.472 ************************************ 00:37:15.472 13:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:15.472 13:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:15.472 13:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:15.472 13:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.472 13:45:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.734 spdk_targetn1 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.734 [2024-12-06 13:45:02.234231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.734 [2024-12-06 13:45:02.282646] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:15.734 13:45:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:15.995 [2024-12-06 13:45:02.437397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:280 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:15.995 [2024-12-06 13:45:02.437449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:37:15.995 [2024-12-06 13:45:02.437959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:312 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:15.995 [2024-12-06 13:45:02.437978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:37:15.995 [2024-12-06 13:45:02.451984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:672 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:15.995 [2024-12-06 
13:45:02.452015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0056 p:1 m:0 dnr:0 00:37:15.995 [2024-12-06 13:45:02.452923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:720 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:15.995 [2024-12-06 13:45:02.452943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:37:15.995 [2024-12-06 13:45:02.460011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:896 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:15.996 [2024-12-06 13:45:02.460040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0071 p:1 m:0 dnr:0 00:37:15.996 [2024-12-06 13:45:02.476036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1384 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:15.996 [2024-12-06 13:45:02.476066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ae p:1 m:0 dnr:0 00:37:15.996 [2024-12-06 13:45:02.532063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3000 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:15.996 [2024-12-06 13:45:02.532096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:15.996 [2024-12-06 13:45:02.596035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3576 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:15.996 [2024-12-06 13:45:02.596067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c1 p:0 m:0 dnr:0 00:37:15.996 [2024-12-06 13:45:02.604637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3840 len:8 
PRP1 0x200004ac0000 PRP2 0x0 00:37:15.996 [2024-12-06 13:45:02.604668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e2 p:0 m:0 dnr:0 00:37:19.300 Initializing NVMe Controllers 00:37:19.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:19.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:19.300 Initialization complete. Launching workers. 00:37:19.300 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10392, failed: 9 00:37:19.300 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2437, failed to submit 7964 00:37:19.300 success 711, unsuccessful 1726, failed 0 00:37:19.300 13:45:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:19.300 13:45:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:19.300 [2024-12-06 13:45:05.851572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:19.300 [2024-12-06 13:45:05.851619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:37:19.300 [2024-12-06 13:45:05.891576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2080 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:19.300 [2024-12-06 13:45:05.891601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:19.300 [2024-12-06 13:45:05.907570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 
lba:2456 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:37:19.300 [2024-12-06 13:45:05.907592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:19.300 [2024-12-06 13:45:05.955263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3464 len:8 PRP1 0x200004e42000 PRP2 0x0 00:37:19.300 [2024-12-06 13:45:05.955284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:37:22.595 [2024-12-06 13:45:08.769301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:67960 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:37:22.595 [2024-12-06 13:45:08.769328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:22.595 [2024-12-06 13:45:08.938871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae630 is same with the state(6) to be set 00:37:22.595 Initializing NVMe Controllers 00:37:22.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:22.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:22.595 Initialization complete. Launching workers. 
00:37:22.595 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8614, failed: 5 00:37:22.595 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1203, failed to submit 7416 00:37:22.595 success 354, unsuccessful 849, failed 0 00:37:22.595 13:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:22.595 13:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:25.891 Initializing NVMe Controllers 00:37:25.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:25.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:25.891 Initialization complete. Launching workers. 00:37:25.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43687, failed: 0 00:37:25.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2781, failed to submit 40906 00:37:25.891 success 611, unsuccessful 2170, failed 0 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:25.891 13:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:27.798 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.798 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2471252 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2471252 ']' 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2471252 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471252 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471252' 00:37:27.799 killing process with pid 2471252 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2471252 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2471252 00:37:27.799 00:37:27.799 real 0m12.287s 00:37:27.799 user 0m50.022s 00:37:27.799 sys 0m2.028s 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:27.799 ************************************ 00:37:27.799 END TEST spdk_target_abort 00:37:27.799 
************************************ 00:37:27.799 13:45:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:27.799 13:45:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:27.799 13:45:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.799 13:45:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:27.799 ************************************ 00:37:27.799 START TEST kernel_target_abort 00:37:27.799 ************************************ 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:27.799 13:45:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:31.095 Waiting for block devices as requested 00:37:31.095 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:31.095 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:31.354 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:31.354 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:31.354 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:31.614 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:31.614 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:31.614 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:31.874 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:31.874 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:31.874 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:32.134 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:32.134 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:32.134 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:32.394 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:32.394 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:32.394 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:32.656 No valid GPT data, bailing 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:32.656 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:32.657 13:45:19 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:32.657 00:37:32.657 Discovery Log Number of Records 2, Generation counter 2 00:37:32.657 =====Discovery Log Entry 0====== 00:37:32.657 trtype: tcp 00:37:32.657 adrfam: ipv4 00:37:32.657 subtype: current discovery subsystem 00:37:32.657 treq: not specified, sq flow control disable supported 00:37:32.657 portid: 1 00:37:32.657 trsvcid: 4420 00:37:32.657 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:32.657 traddr: 10.0.0.1 00:37:32.657 eflags: none 00:37:32.657 sectype: none 00:37:32.657 =====Discovery Log Entry 1====== 00:37:32.657 trtype: tcp 00:37:32.657 adrfam: ipv4 00:37:32.657 subtype: nvme subsystem 00:37:32.657 treq: not specified, sq flow control disable supported 00:37:32.657 portid: 1 00:37:32.657 trsvcid: 4420 00:37:32.657 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:32.657 traddr: 10.0.0.1 00:37:32.657 eflags: none 00:37:32.657 sectype: none 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:32.657 13:45:19 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:32.657 13:45:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.040 Initializing NVMe Controllers 00:37:36.040 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:36.040 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:36.040 Initialization complete. Launching workers. 
00:37:36.040 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67652, failed: 0 00:37:36.040 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67652, failed to submit 0 00:37:36.040 success 0, unsuccessful 67652, failed 0 00:37:36.040 13:45:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:36.040 13:45:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:39.341 Initializing NVMe Controllers 00:37:39.341 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:39.341 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:39.341 Initialization complete. Launching workers. 00:37:39.341 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 121469, failed: 0 00:37:39.341 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30578, failed to submit 90891 00:37:39.341 success 0, unsuccessful 30578, failed 0 00:37:39.341 13:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:39.341 13:45:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:41.876 Initializing NVMe Controllers 00:37:41.876 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:41.876 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:41.876 Initialization complete. Launching workers. 
00:37:41.876 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146323, failed: 0 00:37:41.876 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36614, failed to submit 109709 00:37:41.876 success 0, unsuccessful 36614, failed 0 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:41.876 13:45:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:46.077 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:46.077 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:46.078 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:47.460 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:47.460 00:37:47.460 real 0m19.529s 00:37:47.460 user 0m9.617s 00:37:47.460 sys 0m5.691s 00:37:47.460 13:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.460 13:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:47.460 ************************************ 00:37:47.460 END TEST kernel_target_abort 00:37:47.460 ************************************ 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.460 rmmod nvme_tcp 00:37:47.460 rmmod nvme_fabrics 00:37:47.460 rmmod nvme_keyring 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2471252 ']' 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2471252 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2471252 ']' 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2471252 00:37:47.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2471252) - No such process 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2471252 is not found' 00:37:47.460 Process with pid 2471252 is not found 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:47.460 13:45:33 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:50.758 Waiting for block devices as requested 00:37:50.758 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:50.758 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:51.017 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:51.017 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:51.017 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:51.276 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:51.276 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:51.276 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:51.537 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:51.537 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:51.796 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:51.796 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:51.796 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:52.056 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:52.056 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:52.056 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:52.316 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:52.316 13:45:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.226 13:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:54.226 00:37:54.226 real 0m51.323s 00:37:54.226 user 1m4.919s 00:37:54.226 sys 0m18.583s 00:37:54.226 13:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:54.226 13:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.226 ************************************ 00:37:54.226 END TEST nvmf_abort_qd_sizes 00:37:54.226 ************************************ 00:37:54.487 13:45:40 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:54.487 13:45:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:54.487 13:45:40 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:37:54.487 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:37:54.487 ************************************ 00:37:54.487 START TEST keyring_file 00:37:54.487 ************************************ 00:37:54.487 13:45:40 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:54.487 * Looking for test storage... 00:37:54.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:54.487 13:45:41 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:54.487 13:45:41 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:54.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.487 --rc genhtml_branch_coverage=1 00:37:54.487 --rc genhtml_function_coverage=1 00:37:54.487 --rc genhtml_legend=1 00:37:54.487 --rc geninfo_all_blocks=1 00:37:54.487 --rc geninfo_unexecuted_blocks=1 00:37:54.487 00:37:54.487 ' 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:54.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.487 --rc genhtml_branch_coverage=1 00:37:54.487 --rc genhtml_function_coverage=1 00:37:54.487 --rc genhtml_legend=1 00:37:54.487 --rc geninfo_all_blocks=1 00:37:54.487 --rc 
geninfo_unexecuted_blocks=1 00:37:54.487 00:37:54.487 ' 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:54.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.487 --rc genhtml_branch_coverage=1 00:37:54.487 --rc genhtml_function_coverage=1 00:37:54.487 --rc genhtml_legend=1 00:37:54.487 --rc geninfo_all_blocks=1 00:37:54.487 --rc geninfo_unexecuted_blocks=1 00:37:54.487 00:37:54.487 ' 00:37:54.487 13:45:41 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:54.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:54.487 --rc genhtml_branch_coverage=1 00:37:54.487 --rc genhtml_function_coverage=1 00:37:54.487 --rc genhtml_legend=1 00:37:54.487 --rc geninfo_all_blocks=1 00:37:54.487 --rc geninfo_unexecuted_blocks=1 00:37:54.487 00:37:54.487 ' 00:37:54.487 13:45:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:54.487 13:45:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:54.487 13:45:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:54.749 13:45:41 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:54.749 13:45:41 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:54.749 13:45:41 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:54.749 13:45:41 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:54.749 13:45:41 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:54.749 13:45:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.749 13:45:41 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.749 13:45:41 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.749 13:45:41 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:54.749 13:45:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:54.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.W5ybF5nRxP 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.W5ybF5nRxP 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.W5ybF5nRxP 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.W5ybF5nRxP 00:37:54.749 13:45:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BNld8jUppJ 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:54.749 13:45:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BNld8jUppJ 00:37:54.749 13:45:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BNld8jUppJ 00:37:54.750 13:45:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BNld8jUppJ 
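The prep_key trace above writes each key to a mktemp file and locks it down to mode 0600 before handing the path to keyring_file_add_key. The following is a minimal standalone sketch of just those file-handling steps; the interchange-PSK encoding that the traced `python -` heredoc performs is deliberately omitted, and the raw hex key stands in for it here as an illustrative simplification.

```shell
# Sketch of prep_key's file handling: store a key in a temp file that
# only the owner can read, as the keyring module requires.
key=00112233445566778899aabbccddeeff   # sample key0 value from the test
path=$(mktemp)
# The real helper pipes the key through format_interchange_psk first;
# writing the raw hex directly is a simplification for illustration.
printf '%s\n' "$key" > "$path"
chmod 0600 "$path"
stat -c %a "$path"   # prints 600
```

The 0600 permission matters: the target refuses world- or group-readable key files, which is why the helper chmods before echoing the path back to the caller.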
00:37:54.750 13:45:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=2481739 00:37:54.750 13:45:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2481739 00:37:54.750 13:45:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:54.750 13:45:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2481739 ']' 00:37:54.750 13:45:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.750 13:45:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.750 13:45:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.750 13:45:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.750 13:45:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:54.750 [2024-12-06 13:45:41.346634] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
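The waitforlisten call above blocks until the freshly launched spdk_tgt answers on its UNIX domain socket, retrying up to max_retries=100. A rough standalone sketch of that poll-with-retry-budget pattern, assuming only that readiness can be probed by a command that exits nonzero until the service is up (a plain file-existence check stands in for the RPC probe here):

```shell
# Sketch of a waitforlisten-style poll: retry a readiness probe a bounded
# number of times, failing loudly if the budget is exhausted.
wait_for() {
    local probe=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if eval "$probe"; then
            return 0
        fi
        sleep 0.1
    done
    echo "gave up after $max_retries retries" >&2
    return 1
}

marker=$(mktemp -u)                  # path that does not exist yet
( sleep 0.3; touch "$marker" ) &     # "service" comes up shortly
wait_for "[ -e $marker ]" 100 && echo "listening"
```

Bounding the retries is what lets the suite fail fast with a clear message instead of hanging when spdk_tgt never comes up.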
00:37:54.750 [2024-12-06 13:45:41.346703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481739 ] 00:37:55.010 [2024-12-06 13:45:41.422172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.010 [2024-12-06 13:45:41.476394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:55.581 13:45:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:55.581 [2024-12-06 13:45:42.167034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.581 null0 00:37:55.581 [2024-12-06 13:45:42.199070] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:55.581 [2024-12-06 13:45:42.199501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.581 13:45:42 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.581 13:45:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:55.581 [2024-12-06 13:45:42.231137] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:55.581 request: 00:37:55.581 { 00:37:55.581 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:55.581 "secure_channel": false, 00:37:55.581 "listen_address": { 00:37:55.581 "trtype": "tcp", 00:37:55.581 "traddr": "127.0.0.1", 00:37:55.581 "trsvcid": "4420" 00:37:55.581 }, 00:37:55.581 "method": "nvmf_subsystem_add_listener", 00:37:55.581 "req_id": 1 00:37:55.581 } 00:37:55.581 Got JSON-RPC error response 00:37:55.581 response: 00:37:55.581 { 00:37:55.581 "code": -32602, 00:37:55.841 "message": "Invalid parameters" 00:37:55.841 } 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:55.841 13:45:42 keyring_file -- keyring/file.sh@47 -- # bperfpid=2482009 00:37:55.841 13:45:42 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2482009 /var/tmp/bperf.sock 00:37:55.841 13:45:42 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:55.841 13:45:42 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2482009 ']' 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:55.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:55.841 13:45:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:55.841 [2024-12-06 13:45:42.292764] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 00:37:55.841 [2024-12-06 13:45:42.292827] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2482009 ] 00:37:55.841 [2024-12-06 13:45:42.383124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.841 [2024-12-06 13:45:42.435307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.779 13:45:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:56.779 13:45:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:56.779 13:45:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:37:56.779 13:45:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:37:56.780 13:45:43 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BNld8jUppJ 00:37:56.780 13:45:43 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BNld8jUppJ 00:37:57.039 13:45:43 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:57.039 13:45:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:57.039 13:45:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:57.039 13:45:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:57.039 13:45:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:57.039 13:45:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.W5ybF5nRxP == \/\t\m\p\/\t\m\p\.\W\5\y\b\F\5\n\R\x\P ]] 00:37:57.039 13:45:43 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:57.039 13:45:43 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:57.039 13:45:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:57.039 13:45:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:57.039 13:45:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:57.299 13:45:43 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BNld8jUppJ == \/\t\m\p\/\t\m\p\.\B\N\l\d\8\j\U\p\p\J ]] 00:37:57.299 13:45:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:57.299 13:45:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:57.299 13:45:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:57.299 13:45:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:57.299 13:45:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:57.299 13:45:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
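The get_key and get_refcnt helpers traced above pull one entry out of the keyring_get_keys reply with a jq select filter. The same filter run against a canned reply (sample data shaped like the log's output, not fetched from a live target):

```shell
# Sketch of get_refcnt's jq filter on a hand-written keyring_get_keys
# style reply: select the entry by name, then read one field from it.
keys='[{"name":"key0","path":"/tmp/tmp.W5ybF5nRxP","refcnt":1},
       {"name":"key1","path":"/tmp/tmp.BNld8jUppJ","refcnt":1}]'
printf '%s\n' "$keys" | jq -r '.[] | select(.name == "key0") | .path'
# prints /tmp/tmp.W5ybF5nRxP
printf '%s\n' "$keys" | jq '.[] | select(.name == "key1") | .refcnt'
# prints 1
```

The `-r` flag strips the JSON quoting so the path can be compared with `[[ ... == ... ]]` exactly as the file.sh assertions above do.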
00:37:57.558 13:45:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:57.558 13:45:44 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:57.558 13:45:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:57.558 13:45:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:57.558 13:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:57.558 13:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:57.558 13:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:57.818 13:45:44 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:57.818 13:45:44 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:57.818 13:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:57.818 [2024-12-06 13:45:44.405878] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:58.078 nvme0n1 00:37:58.078 13:45:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:58.078 13:45:44 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:58.078 13:45:44 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:58.078 13:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:58.338 13:45:44 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:58.338 13:45:44 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:58.601 Running I/O for 1 seconds... 00:37:59.536 17546.00 IOPS, 68.54 MiB/s 00:37:59.536 Latency(us) 00:37:59.536 [2024-12-06T12:45:46.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.536 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:59.536 nvme0n1 : 1.00 17610.25 68.79 0.00 0.00 7255.74 2170.88 17476.27 00:37:59.536 [2024-12-06T12:45:46.195Z] =================================================================================================================== 00:37:59.536 [2024-12-06T12:45:46.195Z] Total : 17610.25 68.79 0.00 0.00 7255.74 2170.88 17476.27 00:37:59.536 { 00:37:59.536 "results": [ 00:37:59.536 { 00:37:59.536 "job": "nvme0n1", 00:37:59.536 "core_mask": "0x2", 00:37:59.536 "workload": "randrw", 00:37:59.536 "percentage": 50, 00:37:59.536 "status": "finished", 00:37:59.536 "queue_depth": 128, 00:37:59.536 "io_size": 4096, 00:37:59.536 "runtime": 1.003677, 00:37:59.536 "iops": 17610.24712133485, 00:37:59.536 "mibps": 68.79002781771426, 
00:37:59.536 "io_failed": 0, 00:37:59.536 "io_timeout": 0, 00:37:59.536 "avg_latency_us": 7255.743233569072, 00:37:59.536 "min_latency_us": 2170.88, 00:37:59.536 "max_latency_us": 17476.266666666666 00:37:59.536 } 00:37:59.536 ], 00:37:59.536 "core_count": 1 00:37:59.536 } 00:37:59.536 13:45:46 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:59.536 13:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:59.795 13:45:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:59.795 13:45:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:59.795 13:45:46 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:59.795 13:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.053 13:45:46 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:00.053 13:45:46 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:00.053 13:45:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:00.054 13:45:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:00.054 13:45:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:00.054 13:45:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:00.054 13:45:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:00.054 13:45:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:00.054 13:45:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:00.054 13:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:00.313 [2024-12-06 13:45:46.732201] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:00.313 [2024-12-06 13:45:46.732291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6870 (107): Transport endpoint is not connected 00:38:00.313 [2024-12-06 13:45:46.733286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6870 (9): Bad file descriptor 00:38:00.313 [2024-12-06 13:45:46.734288] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:00.313 [2024-12-06 13:45:46.734298] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:00.313 [2024-12-06 13:45:46.734304] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:00.313 [2024-12-06 13:45:46.734310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:00.313 request: 00:38:00.313 { 00:38:00.313 "name": "nvme0", 00:38:00.313 "trtype": "tcp", 00:38:00.313 "traddr": "127.0.0.1", 00:38:00.313 "adrfam": "ipv4", 00:38:00.313 "trsvcid": "4420", 00:38:00.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:00.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:00.313 "prchk_reftag": false, 00:38:00.313 "prchk_guard": false, 00:38:00.313 "hdgst": false, 00:38:00.313 "ddgst": false, 00:38:00.313 "psk": "key1", 00:38:00.313 "allow_unrecognized_csi": false, 00:38:00.313 "method": "bdev_nvme_attach_controller", 00:38:00.313 "req_id": 1 00:38:00.313 } 00:38:00.313 Got JSON-RPC error response 00:38:00.313 response: 00:38:00.313 { 00:38:00.313 "code": -5, 00:38:00.313 "message": "Input/output error" 00:38:00.313 } 00:38:00.313 13:45:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:00.313 13:45:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:00.313 13:45:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:00.313 13:45:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:00.313 13:45:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.313 13:45:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:00.313 13:45:46 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:00.313 13:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:00.573 13:45:47 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:00.573 13:45:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:00.573 13:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:00.833 13:45:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:00.833 13:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:00.833 13:45:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:00.833 13:45:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:00.833 13:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.094 13:45:47 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:38:01.094 13:45:47 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.W5ybF5nRxP 00:38:01.094 13:45:47 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.094 13:45:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:38:01.094 13:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:38:01.354 [2024-12-06 13:45:47.766987] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.W5ybF5nRxP': 0100660 00:38:01.354 [2024-12-06 13:45:47.767006] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:01.354 request: 00:38:01.354 { 00:38:01.354 "name": "key0", 00:38:01.354 "path": "/tmp/tmp.W5ybF5nRxP", 00:38:01.354 "method": "keyring_file_add_key", 00:38:01.354 "req_id": 1 00:38:01.354 } 00:38:01.354 Got JSON-RPC error response 00:38:01.354 response: 00:38:01.354 { 00:38:01.354 "code": -1, 00:38:01.354 "message": "Operation not permitted" 00:38:01.354 } 00:38:01.354 13:45:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:01.354 13:45:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:01.354 13:45:47 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:01.354 13:45:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:01.354 13:45:47 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.W5ybF5nRxP 00:38:01.354 13:45:47 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:38:01.354 13:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.W5ybF5nRxP 00:38:01.354 13:45:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.W5ybF5nRxP 00:38:01.354 13:45:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:01.354 13:45:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:01.354 13:45:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:01.354 13:45:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:01.354 13:45:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:01.354 13:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:01.616 13:45:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:01.616 13:45:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:01.616 13:45:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:01.616 13:45:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:01.616 13:45:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:01.616 13:45:48 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.616 13:45:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:01.616 13:45:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.616 13:45:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:01.616 13:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:01.877 [2024-12-06 13:45:48.332422] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.W5ybF5nRxP': No such file or directory 00:38:01.877 [2024-12-06 13:45:48.332436] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:01.877 [2024-12-06 13:45:48.332450] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:01.877 [2024-12-06 13:45:48.332459] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:01.877 [2024-12-06 13:45:48.332465] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:01.877 [2024-12-06 13:45:48.332470] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:01.877 request: 00:38:01.877 { 00:38:01.877 "name": "nvme0", 00:38:01.877 "trtype": "tcp", 00:38:01.877 "traddr": "127.0.0.1", 00:38:01.877 "adrfam": "ipv4", 00:38:01.877 "trsvcid": "4420", 00:38:01.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:01.877 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:01.877 "prchk_reftag": false, 00:38:01.877 "prchk_guard": false, 00:38:01.877 "hdgst": false, 00:38:01.877 "ddgst": false, 00:38:01.877 "psk": "key0", 00:38:01.877 "allow_unrecognized_csi": false, 00:38:01.877 "method": "bdev_nvme_attach_controller", 00:38:01.877 "req_id": 1 00:38:01.877 } 00:38:01.877 Got JSON-RPC error response 00:38:01.877 response: 00:38:01.877 { 00:38:01.877 "code": -19, 00:38:01.877 "message": "No such device" 00:38:01.877 } 00:38:01.877 13:45:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:01.877 13:45:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:01.877 13:45:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:01.877 13:45:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:01.877 13:45:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:01.877 13:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:02.139 13:45:48 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NStHkNMgIS 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:02.139 13:45:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:02.139 13:45:48 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:02.139 13:45:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:02.139 13:45:48 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:02.139 13:45:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:02.139 13:45:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NStHkNMgIS 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NStHkNMgIS 00:38:02.139 13:45:48 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.NStHkNMgIS 00:38:02.139 13:45:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NStHkNMgIS 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NStHkNMgIS 00:38:02.139 13:45:48 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:02.139 13:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:02.400 nvme0n1 00:38:02.400 13:45:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:02.400 13:45:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:02.400 13:45:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:02.400 13:45:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.400 13:45:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:02.400 13:45:49 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:02.661 13:45:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:02.661 13:45:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:02.661 13:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:02.922 13:45:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:02.922 13:45:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:02.922 13:45:49 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:02.922 13:45:49 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:02.922 13:45:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:02.923 13:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.184 13:45:49 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:03.184 13:45:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:03.184 13:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:38:03.445 13:45:49 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:03.445 13:45:49 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:03.445 13:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:03.445 13:45:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:03.445 13:45:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NStHkNMgIS 00:38:03.445 13:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NStHkNMgIS 00:38:03.707 13:45:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BNld8jUppJ 00:38:03.707 13:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BNld8jUppJ 00:38:03.968 13:45:50 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:03.968 13:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:04.229 nvme0n1 00:38:04.229 13:45:50 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:04.229 13:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:04.491 13:45:50 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:04.491 "subsystems": [ 00:38:04.491 { 00:38:04.491 "subsystem": 
"keyring", 00:38:04.491 "config": [ 00:38:04.491 { 00:38:04.491 "method": "keyring_file_add_key", 00:38:04.491 "params": { 00:38:04.491 "name": "key0", 00:38:04.491 "path": "/tmp/tmp.NStHkNMgIS" 00:38:04.491 } 00:38:04.491 }, 00:38:04.491 { 00:38:04.491 "method": "keyring_file_add_key", 00:38:04.491 "params": { 00:38:04.491 "name": "key1", 00:38:04.492 "path": "/tmp/tmp.BNld8jUppJ" 00:38:04.492 } 00:38:04.492 } 00:38:04.492 ] 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "subsystem": "iobuf", 00:38:04.492 "config": [ 00:38:04.492 { 00:38:04.492 "method": "iobuf_set_options", 00:38:04.492 "params": { 00:38:04.492 "small_pool_count": 8192, 00:38:04.492 "large_pool_count": 1024, 00:38:04.492 "small_bufsize": 8192, 00:38:04.492 "large_bufsize": 135168, 00:38:04.492 "enable_numa": false 00:38:04.492 } 00:38:04.492 } 00:38:04.492 ] 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "subsystem": "sock", 00:38:04.492 "config": [ 00:38:04.492 { 00:38:04.492 "method": "sock_set_default_impl", 00:38:04.492 "params": { 00:38:04.492 "impl_name": "posix" 00:38:04.492 } 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "method": "sock_impl_set_options", 00:38:04.492 "params": { 00:38:04.492 "impl_name": "ssl", 00:38:04.492 "recv_buf_size": 4096, 00:38:04.492 "send_buf_size": 4096, 00:38:04.492 "enable_recv_pipe": true, 00:38:04.492 "enable_quickack": false, 00:38:04.492 "enable_placement_id": 0, 00:38:04.492 "enable_zerocopy_send_server": true, 00:38:04.492 "enable_zerocopy_send_client": false, 00:38:04.492 "zerocopy_threshold": 0, 00:38:04.492 "tls_version": 0, 00:38:04.492 "enable_ktls": false 00:38:04.492 } 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "method": "sock_impl_set_options", 00:38:04.492 "params": { 00:38:04.492 "impl_name": "posix", 00:38:04.492 "recv_buf_size": 2097152, 00:38:04.492 "send_buf_size": 2097152, 00:38:04.492 "enable_recv_pipe": true, 00:38:04.492 "enable_quickack": false, 00:38:04.492 "enable_placement_id": 0, 00:38:04.492 "enable_zerocopy_send_server": true, 
00:38:04.492 "enable_zerocopy_send_client": false, 00:38:04.492 "zerocopy_threshold": 0, 00:38:04.492 "tls_version": 0, 00:38:04.492 "enable_ktls": false 00:38:04.492 } 00:38:04.492 } 00:38:04.492 ] 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "subsystem": "vmd", 00:38:04.492 "config": [] 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "subsystem": "accel", 00:38:04.492 "config": [ 00:38:04.492 { 00:38:04.492 "method": "accel_set_options", 00:38:04.492 "params": { 00:38:04.492 "small_cache_size": 128, 00:38:04.492 "large_cache_size": 16, 00:38:04.492 "task_count": 2048, 00:38:04.492 "sequence_count": 2048, 00:38:04.492 "buf_count": 2048 00:38:04.492 } 00:38:04.492 } 00:38:04.492 ] 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "subsystem": "bdev", 00:38:04.492 "config": [ 00:38:04.492 { 00:38:04.492 "method": "bdev_set_options", 00:38:04.492 "params": { 00:38:04.492 "bdev_io_pool_size": 65535, 00:38:04.492 "bdev_io_cache_size": 256, 00:38:04.492 "bdev_auto_examine": true, 00:38:04.492 "iobuf_small_cache_size": 128, 00:38:04.492 "iobuf_large_cache_size": 16 00:38:04.492 } 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "method": "bdev_raid_set_options", 00:38:04.492 "params": { 00:38:04.492 "process_window_size_kb": 1024, 00:38:04.492 "process_max_bandwidth_mb_sec": 0 00:38:04.492 } 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "method": "bdev_iscsi_set_options", 00:38:04.492 "params": { 00:38:04.492 "timeout_sec": 30 00:38:04.492 } 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "method": "bdev_nvme_set_options", 00:38:04.492 "params": { 00:38:04.492 "action_on_timeout": "none", 00:38:04.492 "timeout_us": 0, 00:38:04.492 "timeout_admin_us": 0, 00:38:04.492 "keep_alive_timeout_ms": 10000, 00:38:04.492 "arbitration_burst": 0, 00:38:04.492 "low_priority_weight": 0, 00:38:04.492 "medium_priority_weight": 0, 00:38:04.492 "high_priority_weight": 0, 00:38:04.492 "nvme_adminq_poll_period_us": 10000, 00:38:04.492 "nvme_ioq_poll_period_us": 0, 00:38:04.492 "io_queue_requests": 512, 
00:38:04.492 "delay_cmd_submit": true, 00:38:04.492 "transport_retry_count": 4, 00:38:04.492 "bdev_retry_count": 3, 00:38:04.492 "transport_ack_timeout": 0, 00:38:04.492 "ctrlr_loss_timeout_sec": 0, 00:38:04.492 "reconnect_delay_sec": 0, 00:38:04.492 "fast_io_fail_timeout_sec": 0, 00:38:04.492 "disable_auto_failback": false, 00:38:04.492 "generate_uuids": false, 00:38:04.492 "transport_tos": 0, 00:38:04.492 "nvme_error_stat": false, 00:38:04.492 "rdma_srq_size": 0, 00:38:04.492 "io_path_stat": false, 00:38:04.492 "allow_accel_sequence": false, 00:38:04.492 "rdma_max_cq_size": 0, 00:38:04.492 "rdma_cm_event_timeout_ms": 0, 00:38:04.492 "dhchap_digests": [ 00:38:04.492 "sha256", 00:38:04.492 "sha384", 00:38:04.492 "sha512" 00:38:04.492 ], 00:38:04.492 "dhchap_dhgroups": [ 00:38:04.492 "null", 00:38:04.492 "ffdhe2048", 00:38:04.492 "ffdhe3072", 00:38:04.492 "ffdhe4096", 00:38:04.492 "ffdhe6144", 00:38:04.492 "ffdhe8192" 00:38:04.492 ] 00:38:04.492 } 00:38:04.492 }, 00:38:04.492 { 00:38:04.492 "method": "bdev_nvme_attach_controller", 00:38:04.492 "params": { 00:38:04.492 "name": "nvme0", 00:38:04.492 "trtype": "TCP", 00:38:04.492 "adrfam": "IPv4", 00:38:04.492 "traddr": "127.0.0.1", 00:38:04.492 "trsvcid": "4420", 00:38:04.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.492 "prchk_reftag": false, 00:38:04.492 "prchk_guard": false, 00:38:04.492 "ctrlr_loss_timeout_sec": 0, 00:38:04.493 "reconnect_delay_sec": 0, 00:38:04.493 "fast_io_fail_timeout_sec": 0, 00:38:04.493 "psk": "key0", 00:38:04.493 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:04.493 "hdgst": false, 00:38:04.493 "ddgst": false, 00:38:04.493 "multipath": "multipath" 00:38:04.493 } 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "method": "bdev_nvme_set_hotplug", 00:38:04.493 "params": { 00:38:04.493 "period_us": 100000, 00:38:04.493 "enable": false 00:38:04.493 } 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "method": "bdev_wait_for_examine" 00:38:04.493 } 00:38:04.493 ] 00:38:04.493 }, 00:38:04.493 { 
00:38:04.493 "subsystem": "nbd", 00:38:04.493 "config": [] 00:38:04.493 } 00:38:04.493 ] 00:38:04.493 }' 00:38:04.493 13:45:50 keyring_file -- keyring/file.sh@115 -- # killprocess 2482009 00:38:04.493 13:45:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2482009 ']' 00:38:04.493 13:45:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2482009 00:38:04.493 13:45:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:04.493 13:45:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:04.493 13:45:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2482009 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2482009' 00:38:04.493 killing process with pid 2482009 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@973 -- # kill 2482009 00:38:04.493 Received shutdown signal, test time was about 1.000000 seconds 00:38:04.493 00:38:04.493 Latency(us) 00:38:04.493 [2024-12-06T12:45:51.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.493 [2024-12-06T12:45:51.152Z] =================================================================================================================== 00:38:04.493 [2024-12-06T12:45:51.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@978 -- # wait 2482009 00:38:04.493 13:45:51 keyring_file -- keyring/file.sh@118 -- # bperfpid=2483815 00:38:04.493 13:45:51 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2483815 /var/tmp/bperf.sock 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2483815 ']' 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:04.493 13:45:51 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:04.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:04.493 13:45:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:04.493 13:45:51 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:04.493 "subsystems": [ 00:38:04.493 { 00:38:04.493 "subsystem": "keyring", 00:38:04.493 "config": [ 00:38:04.493 { 00:38:04.493 "method": "keyring_file_add_key", 00:38:04.493 "params": { 00:38:04.493 "name": "key0", 00:38:04.493 "path": "/tmp/tmp.NStHkNMgIS" 00:38:04.493 } 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "method": "keyring_file_add_key", 00:38:04.493 "params": { 00:38:04.493 "name": "key1", 00:38:04.493 "path": "/tmp/tmp.BNld8jUppJ" 00:38:04.493 } 00:38:04.493 } 00:38:04.493 ] 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "subsystem": "iobuf", 00:38:04.493 "config": [ 00:38:04.493 { 00:38:04.493 "method": "iobuf_set_options", 00:38:04.493 "params": { 00:38:04.493 "small_pool_count": 8192, 00:38:04.493 "large_pool_count": 1024, 00:38:04.493 "small_bufsize": 8192, 00:38:04.493 "large_bufsize": 135168, 00:38:04.493 "enable_numa": false 00:38:04.493 } 00:38:04.493 } 00:38:04.493 ] 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "subsystem": "sock", 00:38:04.493 "config": [ 00:38:04.493 { 00:38:04.493 "method": "sock_set_default_impl", 00:38:04.493 "params": { 00:38:04.493 "impl_name": "posix" 00:38:04.493 } 00:38:04.493 }, 
00:38:04.493 { 00:38:04.493 "method": "sock_impl_set_options", 00:38:04.493 "params": { 00:38:04.493 "impl_name": "ssl", 00:38:04.493 "recv_buf_size": 4096, 00:38:04.493 "send_buf_size": 4096, 00:38:04.493 "enable_recv_pipe": true, 00:38:04.493 "enable_quickack": false, 00:38:04.493 "enable_placement_id": 0, 00:38:04.493 "enable_zerocopy_send_server": true, 00:38:04.493 "enable_zerocopy_send_client": false, 00:38:04.493 "zerocopy_threshold": 0, 00:38:04.493 "tls_version": 0, 00:38:04.493 "enable_ktls": false 00:38:04.493 } 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "method": "sock_impl_set_options", 00:38:04.493 "params": { 00:38:04.493 "impl_name": "posix", 00:38:04.493 "recv_buf_size": 2097152, 00:38:04.493 "send_buf_size": 2097152, 00:38:04.493 "enable_recv_pipe": true, 00:38:04.493 "enable_quickack": false, 00:38:04.493 "enable_placement_id": 0, 00:38:04.493 "enable_zerocopy_send_server": true, 00:38:04.493 "enable_zerocopy_send_client": false, 00:38:04.493 "zerocopy_threshold": 0, 00:38:04.493 "tls_version": 0, 00:38:04.493 "enable_ktls": false 00:38:04.493 } 00:38:04.493 } 00:38:04.493 ] 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "subsystem": "vmd", 00:38:04.493 "config": [] 00:38:04.493 }, 00:38:04.493 { 00:38:04.493 "subsystem": "accel", 00:38:04.493 "config": [ 00:38:04.493 { 00:38:04.494 "method": "accel_set_options", 00:38:04.494 "params": { 00:38:04.494 "small_cache_size": 128, 00:38:04.494 "large_cache_size": 16, 00:38:04.494 "task_count": 2048, 00:38:04.494 "sequence_count": 2048, 00:38:04.494 "buf_count": 2048 00:38:04.494 } 00:38:04.494 } 00:38:04.494 ] 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "subsystem": "bdev", 00:38:04.494 "config": [ 00:38:04.494 { 00:38:04.494 "method": "bdev_set_options", 00:38:04.494 "params": { 00:38:04.494 "bdev_io_pool_size": 65535, 00:38:04.494 "bdev_io_cache_size": 256, 00:38:04.494 "bdev_auto_examine": true, 00:38:04.494 "iobuf_small_cache_size": 128, 00:38:04.494 "iobuf_large_cache_size": 16 00:38:04.494 } 
00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "method": "bdev_raid_set_options", 00:38:04.494 "params": { 00:38:04.494 "process_window_size_kb": 1024, 00:38:04.494 "process_max_bandwidth_mb_sec": 0 00:38:04.494 } 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "method": "bdev_iscsi_set_options", 00:38:04.494 "params": { 00:38:04.494 "timeout_sec": 30 00:38:04.494 } 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "method": "bdev_nvme_set_options", 00:38:04.494 "params": { 00:38:04.494 "action_on_timeout": "none", 00:38:04.494 "timeout_us": 0, 00:38:04.494 "timeout_admin_us": 0, 00:38:04.494 "keep_alive_timeout_ms": 10000, 00:38:04.494 "arbitration_burst": 0, 00:38:04.494 "low_priority_weight": 0, 00:38:04.494 "medium_priority_weight": 0, 00:38:04.494 "high_priority_weight": 0, 00:38:04.494 "nvme_adminq_poll_period_us": 10000, 00:38:04.494 "nvme_ioq_poll_period_us": 0, 00:38:04.494 "io_queue_requests": 512, 00:38:04.494 "delay_cmd_submit": true, 00:38:04.494 "transport_retry_count": 4, 00:38:04.494 "bdev_retry_count": 3, 00:38:04.494 "transport_ack_timeout": 0, 00:38:04.494 "ctrlr_loss_timeout_sec": 0, 00:38:04.494 "reconnect_delay_sec": 0, 00:38:04.494 "fast_io_fail_timeout_sec": 0, 00:38:04.494 "disable_auto_failback": false, 00:38:04.494 "generate_uuids": false, 00:38:04.494 "transport_tos": 0, 00:38:04.494 "nvme_error_stat": false, 00:38:04.494 "rdma_srq_size": 0, 00:38:04.494 "io_path_stat": false, 00:38:04.494 "allow_accel_sequence": false, 00:38:04.494 "rdma_max_cq_size": 0, 00:38:04.494 "rdma_cm_event_timeout_ms": 0, 00:38:04.494 "dhchap_digests": [ 00:38:04.494 "sha256", 00:38:04.494 "sha384", 00:38:04.494 "sha512" 00:38:04.494 ], 00:38:04.494 "dhchap_dhgroups": [ 00:38:04.494 "null", 00:38:04.494 "ffdhe2048", 00:38:04.494 "ffdhe3072", 00:38:04.494 "ffdhe4096", 00:38:04.494 "ffdhe6144", 00:38:04.494 "ffdhe8192" 00:38:04.494 ] 00:38:04.494 } 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "method": "bdev_nvme_attach_controller", 00:38:04.494 "params": { 00:38:04.494 
"name": "nvme0", 00:38:04.494 "trtype": "TCP", 00:38:04.494 "adrfam": "IPv4", 00:38:04.494 "traddr": "127.0.0.1", 00:38:04.494 "trsvcid": "4420", 00:38:04.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.494 "prchk_reftag": false, 00:38:04.494 "prchk_guard": false, 00:38:04.494 "ctrlr_loss_timeout_sec": 0, 00:38:04.494 "reconnect_delay_sec": 0, 00:38:04.494 "fast_io_fail_timeout_sec": 0, 00:38:04.494 "psk": "key0", 00:38:04.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:04.494 "hdgst": false, 00:38:04.494 "ddgst": false, 00:38:04.494 "multipath": "multipath" 00:38:04.494 } 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "method": "bdev_nvme_set_hotplug", 00:38:04.494 "params": { 00:38:04.494 "period_us": 100000, 00:38:04.494 "enable": false 00:38:04.494 } 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "method": "bdev_wait_for_examine" 00:38:04.494 } 00:38:04.494 ] 00:38:04.494 }, 00:38:04.494 { 00:38:04.494 "subsystem": "nbd", 00:38:04.494 "config": [] 00:38:04.494 } 00:38:04.494 ] 00:38:04.494 }' 00:38:04.754 [2024-12-06 13:45:51.173119] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:38:04.754 [2024-12-06 13:45:51.173177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483815 ] 00:38:04.755 [2024-12-06 13:45:51.254882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.755 [2024-12-06 13:45:51.283681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.016 [2024-12-06 13:45:51.427551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:05.589 13:45:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:05.589 13:45:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:05.589 13:45:51 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:05.589 13:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.589 13:45:51 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:05.589 13:45:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:05.589 13:45:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:05.589 13:45:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.589 13:45:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.589 13:45:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.589 13:45:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.589 13:45:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.852 13:45:52 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:05.852 13:45:52 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:05.852 13:45:52 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:05.852 13:45:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.852 13:45:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.852 13:45:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.852 13:45:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.852 13:45:52 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:05.852 13:45:52 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:05.852 13:45:52 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:05.852 13:45:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:06.113 13:45:52 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:06.113 13:45:52 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:06.113 13:45:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NStHkNMgIS /tmp/tmp.BNld8jUppJ 00:38:06.113 13:45:52 keyring_file -- keyring/file.sh@20 -- # killprocess 2483815 00:38:06.113 13:45:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2483815 ']' 00:38:06.113 13:45:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2483815 00:38:06.113 13:45:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:06.113 13:45:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.113 13:45:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483815 00:38:06.113 13:45:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:06.114 13:45:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:06.114 13:45:52 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2483815' 00:38:06.114 killing process with pid 2483815 00:38:06.114 13:45:52 keyring_file -- common/autotest_common.sh@973 -- # kill 2483815 00:38:06.114 Received shutdown signal, test time was about 1.000000 seconds 00:38:06.114 00:38:06.114 Latency(us) 00:38:06.114 [2024-12-06T12:45:52.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.114 [2024-12-06T12:45:52.773Z] =================================================================================================================== 00:38:06.114 [2024-12-06T12:45:52.773Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:06.114 13:45:52 keyring_file -- common/autotest_common.sh@978 -- # wait 2483815 00:38:06.374 13:45:52 keyring_file -- keyring/file.sh@21 -- # killprocess 2481739 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2481739 ']' 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2481739 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2481739 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2481739' 00:38:06.374 killing process with pid 2481739 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@973 -- # kill 2481739 00:38:06.374 13:45:52 keyring_file -- common/autotest_common.sh@978 -- # wait 2481739 00:38:06.635 00:38:06.635 real 0m12.122s 00:38:06.635 user 0m29.216s 00:38:06.635 sys 0m2.753s 00:38:06.635 13:45:53 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:38:06.635 13:45:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:06.635 ************************************ 00:38:06.635 END TEST keyring_file 00:38:06.635 ************************************ 00:38:06.635 13:45:53 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:06.635 13:45:53 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:06.635 13:45:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:06.635 13:45:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.635 13:45:53 -- common/autotest_common.sh@10 -- # set +x 00:38:06.635 ************************************ 00:38:06.635 START TEST keyring_linux 00:38:06.635 ************************************ 00:38:06.635 13:45:53 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:06.635 Joined session keyring: 890281329 00:38:06.635 * Looking for test storage... 
00:38:06.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:06.635 13:45:53 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:06.635 13:45:53 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:38:06.635 13:45:53 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:06.897 13:45:53 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:06.897 13:45:53 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.897 13:45:53 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:06.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.897 --rc genhtml_branch_coverage=1 00:38:06.897 --rc genhtml_function_coverage=1 00:38:06.897 --rc genhtml_legend=1 00:38:06.897 --rc geninfo_all_blocks=1 00:38:06.897 --rc geninfo_unexecuted_blocks=1 00:38:06.897 00:38:06.897 ' 00:38:06.897 13:45:53 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:06.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.897 --rc genhtml_branch_coverage=1 00:38:06.897 --rc genhtml_function_coverage=1 00:38:06.897 --rc genhtml_legend=1 00:38:06.897 --rc geninfo_all_blocks=1 00:38:06.897 --rc geninfo_unexecuted_blocks=1 00:38:06.897 00:38:06.897 ' 
00:38:06.897 13:45:53 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:06.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.897 --rc genhtml_branch_coverage=1 00:38:06.897 --rc genhtml_function_coverage=1 00:38:06.897 --rc genhtml_legend=1 00:38:06.897 --rc geninfo_all_blocks=1 00:38:06.897 --rc geninfo_unexecuted_blocks=1 00:38:06.897 00:38:06.897 ' 00:38:06.897 13:45:53 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:06.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.897 --rc genhtml_branch_coverage=1 00:38:06.897 --rc genhtml_function_coverage=1 00:38:06.897 --rc genhtml_legend=1 00:38:06.897 --rc geninfo_all_blocks=1 00:38:06.897 --rc geninfo_unexecuted_blocks=1 00:38:06.897 00:38:06.897 ' 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.897 13:45:53 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.897 13:45:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.897 13:45:53 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.897 13:45:53 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.897 13:45:53 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:06.897 13:45:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:06.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:06.897 13:45:53 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:06.897 /tmp/:spdk-test:key0 00:38:06.897 13:45:53 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:06.897 13:45:53 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:06.898 13:45:53 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:06.898 13:45:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:06.898 13:45:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:06.898 13:45:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:06.898 13:45:53 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:06.898 13:45:53 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:06.898 13:45:53 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:06.898 13:45:53 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:06.898 13:45:53 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:06.898 13:45:53 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:06.898 13:45:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:06.898 13:45:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:06.898 /tmp/:spdk-test:key1 00:38:06.898 13:45:53 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:06.898 
13:45:53 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2484253 00:38:06.898 13:45:53 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2484253 00:38:06.898 13:45:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2484253 ']' 00:38:06.898 13:45:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.898 13:45:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:06.898 13:45:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.898 13:45:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:06.898 13:45:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:06.898 [2024-12-06 13:45:53.511449] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:38:06.898 [2024-12-06 13:45:53.511508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484253 ] 00:38:07.158 [2024-12-06 13:45:53.569697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.159 [2024-12-06 13:45:53.599862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.159 13:45:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.159 13:45:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:07.159 13:45:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:07.159 13:45:53 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.159 13:45:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:07.159 [2024-12-06 13:45:53.782217] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:07.159 null0 00:38:07.159 [2024-12-06 13:45:53.814269] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:07.159 [2024-12-06 13:45:53.814630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.419 13:45:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:07.419 310472571 00:38:07.419 13:45:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:07.419 975022349 00:38:07.419 13:45:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2484264 00:38:07.419 13:45:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2484264 /var/tmp/bperf.sock 00:38:07.419 13:45:53 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2484264 ']' 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:07.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.419 13:45:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:07.419 [2024-12-06 13:45:53.891798] Starting SPDK v25.01-pre git sha1 b82e5bf03 / DPDK 24.03.0 initialization... 
00:38:07.419 [2024-12-06 13:45:53.891844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484264 ] 00:38:07.419 [2024-12-06 13:45:53.974547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.419 [2024-12-06 13:45:54.004373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.363 13:45:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.363 13:45:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:08.363 13:45:54 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:08.363 13:45:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:08.363 13:45:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:08.363 13:45:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:08.623 13:45:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:08.623 13:45:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:08.623 [2024-12-06 13:45:55.217647] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:08.883 nvme0n1 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:08.883 13:45:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:08.883 13:45:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:08.883 13:45:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.883 13:45:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:08.883 13:45:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:09.143 13:45:55 keyring_linux -- keyring/linux.sh@25 -- # sn=310472571 00:38:09.143 13:45:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:09.143 13:45:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:09.143 13:45:55 keyring_linux -- keyring/linux.sh@26 -- # [[ 310472571 == \3\1\0\4\7\2\5\7\1 ]] 00:38:09.143 13:45:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 310472571 00:38:09.143 13:45:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:09.143 13:45:55 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:09.143 Running I/O for 1 seconds... 00:38:10.524 24407.00 IOPS, 95.34 MiB/s 00:38:10.524 Latency(us) 00:38:10.524 [2024-12-06T12:45:57.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:10.524 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:10.524 nvme0n1 : 1.01 24408.59 95.35 0.00 0.00 5228.40 4014.08 9994.24 00:38:10.524 [2024-12-06T12:45:57.183Z] =================================================================================================================== 00:38:10.524 [2024-12-06T12:45:57.183Z] Total : 24408.59 95.35 0.00 0.00 5228.40 4014.08 9994.24 00:38:10.524 { 00:38:10.524 "results": [ 00:38:10.524 { 00:38:10.524 "job": "nvme0n1", 00:38:10.524 "core_mask": "0x2", 00:38:10.524 "workload": "randread", 00:38:10.524 "status": "finished", 00:38:10.524 "queue_depth": 128, 00:38:10.524 "io_size": 4096, 00:38:10.524 "runtime": 1.00522, 00:38:10.524 "iops": 24408.587174946777, 00:38:10.524 "mibps": 95.34604365213585, 00:38:10.524 "io_failed": 0, 00:38:10.524 "io_timeout": 0, 00:38:10.524 "avg_latency_us": 5228.4008781654165, 00:38:10.524 "min_latency_us": 4014.08, 00:38:10.524 "max_latency_us": 9994.24 00:38:10.524 } 00:38:10.524 ], 00:38:10.524 "core_count": 1 00:38:10.524 } 00:38:10.524 13:45:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:10.524 13:45:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:10.524 13:45:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:10.524 13:45:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:10.524 13:45:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:10.524 13:45:57 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:10.524 13:45:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.524 13:45:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:10.783 13:45:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:10.783 13:45:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:10.783 13:45:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:10.783 13:45:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:10.783 13:45:57 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:10.783 13:45:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:10.783 [2024-12-06 13:45:57.373564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:10.783 [2024-12-06 13:45:57.374109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1759620 (107): Transport endpoint is not connected 00:38:10.783 [2024-12-06 13:45:57.375106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1759620 (9): Bad file descriptor 00:38:10.783 [2024-12-06 13:45:57.376108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:10.783 [2024-12-06 13:45:57.376116] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:10.783 [2024-12-06 13:45:57.376121] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:10.783 [2024-12-06 13:45:57.376128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:10.783 request: 00:38:10.783 { 00:38:10.784 "name": "nvme0", 00:38:10.784 "trtype": "tcp", 00:38:10.784 "traddr": "127.0.0.1", 00:38:10.784 "adrfam": "ipv4", 00:38:10.784 "trsvcid": "4420", 00:38:10.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:10.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:10.784 "prchk_reftag": false, 00:38:10.784 "prchk_guard": false, 00:38:10.784 "hdgst": false, 00:38:10.784 "ddgst": false, 00:38:10.784 "psk": ":spdk-test:key1", 00:38:10.784 "allow_unrecognized_csi": false, 00:38:10.784 "method": "bdev_nvme_attach_controller", 00:38:10.784 "req_id": 1 00:38:10.784 } 00:38:10.784 Got JSON-RPC error response 00:38:10.784 response: 00:38:10.784 { 00:38:10.784 "code": -5, 00:38:10.784 "message": "Input/output error" 00:38:10.784 } 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@33 -- # sn=310472571 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 310472571 00:38:10.784 1 links removed 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:10.784 
13:45:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@33 -- # sn=975022349 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 975022349 00:38:10.784 1 links removed 00:38:10.784 13:45:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2484264 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2484264 ']' 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2484264 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.784 13:45:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2484264 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2484264' 00:38:11.043 killing process with pid 2484264 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 2484264 00:38:11.043 Received shutdown signal, test time was about 1.000000 seconds 00:38:11.043 00:38:11.043 Latency(us) 00:38:11.043 [2024-12-06T12:45:57.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.043 [2024-12-06T12:45:57.702Z] =================================================================================================================== 00:38:11.043 [2024-12-06T12:45:57.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 2484264 
00:38:11.043 13:45:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2484253 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2484253 ']' 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2484253 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2484253 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2484253' 00:38:11.043 killing process with pid 2484253 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 2484253 00:38:11.043 13:45:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 2484253 00:38:11.303 00:38:11.303 real 0m4.697s 00:38:11.303 user 0m9.226s 00:38:11.303 sys 0m1.344s 00:38:11.303 13:45:57 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.303 13:45:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:11.303 ************************************ 00:38:11.303 END TEST keyring_linux 00:38:11.303 ************************************ 00:38:11.303 13:45:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:11.303 13:45:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:11.303 13:45:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:11.303 13:45:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:11.303 13:45:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:11.303 13:45:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:11.303 13:45:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:11.303 13:45:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:11.303 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:38:11.303 13:45:57 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:11.303 13:45:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:11.303 13:45:57 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:11.303 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:38:19.441 INFO: APP EXITING 00:38:19.441 INFO: killing all VMs 00:38:19.441 INFO: killing vhost app 00:38:19.441 INFO: EXIT DONE 00:38:22.739 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:22.739 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:22.739 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:22.739 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:23.000 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:23.000 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:23.000 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:23.000 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:23.000 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:27.208 Cleaning 00:38:27.208 Removing: /var/run/dpdk/spdk0/config 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:27.208 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:27.208 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:27.208 Removing: /var/run/dpdk/spdk1/config 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:27.208 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:27.208 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:27.208 Removing: /var/run/dpdk/spdk2/config 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:27.208 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:27.208 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:27.208 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:27.208 Removing: /var/run/dpdk/spdk3/config 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:27.208 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:27.208 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:27.208 Removing: /var/run/dpdk/spdk4/config 00:38:27.208 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:27.208 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:27.208 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:27.208 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:27.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:27.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:27.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:27.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:27.209 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:27.209 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:38:27.209 Removing: /dev/shm/bdev_svc_trace.1 00:38:27.209 Removing: /dev/shm/nvmf_trace.0 00:38:27.209 Removing: /dev/shm/spdk_tgt_trace.pid1907750 00:38:27.209 Removing: /var/run/dpdk/spdk0 00:38:27.209 Removing: /var/run/dpdk/spdk1 00:38:27.209 Removing: /var/run/dpdk/spdk2 00:38:27.209 Removing: /var/run/dpdk/spdk3 00:38:27.209 Removing: /var/run/dpdk/spdk4 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1906073 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1907750 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1908357 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1909466 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1909738 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1910911 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1911134 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1911499 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1912482 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1913306 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1913697 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1914096 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1914505 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1914968 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1915380 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1915766 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1916158 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1917328 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1920810 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1921173 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1921542 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1921605 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1922069 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1922256 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1922643 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1922967 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1923269 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1923352 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1923711 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1923722 00:38:27.209 Removing: 
/var/run/dpdk/spdk_pid1924315 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1924531 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1924928 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1929656 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1934845 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1947141 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1947877 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1952984 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1953476 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1958698 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1965887 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1969470 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1982188 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1993058 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1995078 00:38:27.209 Removing: /var/run/dpdk/spdk_pid1996348 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2017079 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2022372 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2079597 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2085986 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2093128 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2101081 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2101084 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2102084 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2103090 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2104094 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2104767 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2104769 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2105107 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2105116 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2105132 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2106170 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2107183 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2108278 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2108887 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2109004 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2109252 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2110641 
00:38:27.209 Removing: /var/run/dpdk/spdk_pid2111988 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2122431 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2155606 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2161263 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2163634 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2165758 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2166103 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2166450 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2166765 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2167511 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2169530 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2170784 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2171325 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2174029 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2174734 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2175450 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2180519 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2187213 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2187215 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2187216 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2191907 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2202125 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2206906 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2214787 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2216294 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2218134 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2219668 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2225357 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2230701 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2235659 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2244853 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2244941 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2249996 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2250327 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2250467 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2251000 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2251018 00:38:27.209 Removing: 
/var/run/dpdk/spdk_pid2256573 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2257226 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2262678 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2265878 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2273016 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2279568 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2289574 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2298189 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2298216 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2320557 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2321842 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2322575 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2323263 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2324318 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2325014 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2325698 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2326528 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2331752 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2332062 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2339128 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2339507 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2345964 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2350999 00:38:27.209 Removing: /var/run/dpdk/spdk_pid2362619 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2363300 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2368489 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2368910 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2374287 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2381124 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2384129 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2396461 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2407040 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2408970 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2410042 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2430128 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2434848 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2438053 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2445796 
00:38:27.472 Removing: /var/run/dpdk/spdk_pid2445801 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2451671 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2454044 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2456395 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2457722 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2460099 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2461620 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2471664 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2472225 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2472822 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2476088 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2476766 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2477276 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2481739 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2482009 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2483815 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2484253 00:38:27.472 Removing: /var/run/dpdk/spdk_pid2484264 00:38:27.472 Clean 00:38:27.472 13:46:14 -- common/autotest_common.sh@1453 -- # return 0 00:38:27.472 13:46:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:27.472 13:46:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:27.472 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:38:27.472 13:46:14 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:27.472 13:46:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:27.472 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:38:27.734 13:46:14 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:27.734 13:46:14 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:27.734 13:46:14 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:27.734 13:46:14 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:27.734 13:46:14 -- spdk/autotest.sh@398 -- # hostname 00:38:27.734 
13:46:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:27.734 geninfo: WARNING: invalid characters removed from testname! 00:38:54.419 13:46:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:56.961 13:46:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:58.872 13:46:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:00.252 13:46:46 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:02.163 13:46:48 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:04.076 13:46:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:05.990 13:46:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:05.990 13:46:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:05.990 13:46:52 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:05.990 13:46:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:05.990 13:46:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:05.990 13:46:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:05.990 + [[ -n 1820907 ]] 00:39:05.990 + sudo kill 
1820907 00:39:06.002 [Pipeline] } 00:39:06.018 [Pipeline] // stage 00:39:06.024 [Pipeline] } 00:39:06.039 [Pipeline] // timeout 00:39:06.044 [Pipeline] } 00:39:06.060 [Pipeline] // catchError 00:39:06.066 [Pipeline] } 00:39:06.086 [Pipeline] // wrap 00:39:06.092 [Pipeline] } 00:39:06.108 [Pipeline] // catchError 00:39:06.119 [Pipeline] stage 00:39:06.122 [Pipeline] { (Epilogue) 00:39:06.138 [Pipeline] catchError 00:39:06.141 [Pipeline] { 00:39:06.155 [Pipeline] echo 00:39:06.158 Cleanup processes 00:39:06.166 [Pipeline] sh 00:39:06.459 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:06.459 2497274 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:06.475 [Pipeline] sh 00:39:06.765 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:06.765 ++ grep -v 'sudo pgrep' 00:39:06.765 ++ awk '{print $1}' 00:39:06.765 + sudo kill -9 00:39:06.765 + true 00:39:06.779 [Pipeline] sh 00:39:07.069 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:19.308 [Pipeline] sh 00:39:19.594 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:19.595 Artifacts sizes are good 00:39:19.608 [Pipeline] archiveArtifacts 00:39:19.615 Archiving artifacts 00:39:19.755 [Pipeline] sh 00:39:20.040 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:20.053 [Pipeline] cleanWs 00:39:20.062 [WS-CLEANUP] Deleting project workspace... 00:39:20.062 [WS-CLEANUP] Deferred wipeout is used... 00:39:20.069 [WS-CLEANUP] done 00:39:20.070 [Pipeline] } 00:39:20.087 [Pipeline] // catchError 00:39:20.099 [Pipeline] sh 00:39:20.415 + logger -p user.info -t JENKINS-CI 00:39:20.458 [Pipeline] } 00:39:20.471 [Pipeline] // stage 00:39:20.477 [Pipeline] } 00:39:20.492 [Pipeline] // node 00:39:20.498 [Pipeline] End of Pipeline 00:39:20.529 Finished: SUCCESS